7.0 - 12.0 years
32 - 37 Lacs
Bengaluru
Work from Office
About The Role
Job Title: Data Science Engineer, VP
Location: Bangalore, India

Role Description
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications.
- NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience
- 13+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL; familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs (e.g., spaCy), and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.

How we'll support you
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
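The RAG pipeline work this role describes reduces to: embed the query, retrieve the nearest documents, and augment the prompt before generation. A minimal self-contained sketch of that flow (the bag-of-words "embedding" and brute-force scan are stand-ins for a real embedding model and a vector database such as Milvus or FAISS; the documents are invented examples):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline calls an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    # Brute-force nearest-neighbour scan; a vector DB does this at scale.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list) -> str:
    # Augment the LLM prompt with retrieved context before generation.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The settlement cycle for equities is T+1.",
    "Paternity leave policy allows four weeks of leave.",
]
prompt = build_prompt("What is the settlement cycle?", docs)
```

A production system would also rerank results and pull related entities from a knowledge graph before assembling the context.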
Posted 4 days ago
3.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Data Modeling Techniques and Methodologies
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Experience:
- Overall IT experience: 7+ years
- Data Modeling experience: 3+ years
- Data Vault Modeling experience: 2+ years

Key Responsibilities:
- Drive discussions with clients' deal teams to understand business requirements and how the data model fits into implementation and solutioning.
- Develop the solution blueprint, scoping, and estimation in delivery projects and solutioning.
- Drive discovery activities and design workshops with clients, and lead strategic road-mapping and operating-model design discussions.
- Design and develop Data Vault 2.0-compliant models, including Hubs, Links, and Satellites.
- Design and develop the Raw Data Vault and Business Data Vault.
- Translate business requirements into conceptual, logical, and physical data models.
- Work with source system analysts to understand data structures and lineage.
- Ensure conformance to data modeling standards and best practices.
- Collaborate with ETL/ELT developers to implement data models in a modern data warehouse environment (e.g., Snowflake, Databricks, Redshift, BigQuery).
- Document models, data definitions, and metadata.

Technical Experience:
- 7+ years overall IT experience, 3+ years in Data Modeling, and 2+ years in Data Vault Modeling.
- Design and development of the Raw Data Vault and Business Data Vault.
- Strong understanding of Data Vault 2.0 methodology, including business keys, record tracking, and historical tracking.
- Data modeling experience in Dimensional Modeling/3NF modeling.
- Hands-on experience with data modeling tools (e.g., ER/Studio, ERwin, or similar).
- Solid understanding of ETL/ELT processes, data integration, and warehousing concepts.
- Experience with modern cloud data platforms (e.g., Snowflake, Databricks, Azure Synapse, AWS Redshift, or Google BigQuery).
- Excellent SQL skills.

Good-to-Have Skills:
- Any one of these add-on skills: Graph Database Modelling, RDF, Document DB Modeling, Ontology, Semantic Data Modeling.
- Hands-on experience with a Data Vault automation tool (e.g., VaultSpeed, WhereScape, biGENIUS-X, dbt, or similar).
- Understanding of the cloud Data Analytics landscape and Data Lake design knowledge preferred.
- Cloud Data Engineering, Cloud Data Integration.

Professional Experience:
- Strong requirement analysis and technical solutioning skills in Data and Analytics.
- Excellent writing, communication, and presentation skills.
- Eagerness to learn new skills and develop on an ongoing basis.
- Good client-facing and interpersonal skills.

Educational Qualification: B.E. or B.Tech required; 15 years full-time education.
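The Hub tables named above are keyed on a deterministic hash of the normalised business key. A minimal sketch of that Data Vault 2.0 convention (the MD5 choice, the `||` delimiter, and the column names are illustrative assumptions; exact hashing rules vary per implementation standard):

```python
import hashlib
from datetime import datetime, timezone

def hub_hash_key(*business_keys: str) -> str:
    # One common Data Vault 2.0 variant: trim and upper-case each business
    # key, join with a delimiter, then hash deterministically.
    normalised = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest().upper()

def hub_customer_row(customer_id: str, record_source: str) -> dict:
    # A Hub row carries the hash key, the business key itself, and load metadata.
    return {
        "HUB_CUSTOMER_HK": hub_hash_key(customer_id),
        "CUSTOMER_ID": customer_id,
        "LOAD_DTS": datetime.now(timezone.utc).isoformat(),
        "RECORD_SOURCE": record_source,
    }

row = hub_customer_row("C-1001", "CRM")
```

Because the key is a pure function of the normalised business key, Hubs loaded in parallel from different sources converge on the same hash key without lookups.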
Posted 4 days ago
3.0 - 7.0 years
25 - 34 Lacs
Pune
Work from Office
Backend Engineer (Python, Node.js, Agentic AI)
Experience: 3-7 years
Location: Pune
- Minimum 3 years' experience as a backend developer, with exposure to AI/ML
- Minimum 2+ years' experience in backend development using Python and Node.js
For the JD, write to amit.gaine@talentxo.in
Posted 4 days ago
9.0 - 12.0 years
27 - 35 Lacs
Chennai, Bengaluru
Work from Office
Role and Responsibilities:
- Talk to client stakeholders and understand the requirements for building their Business Intelligence dashboards and reports.
- Design, develop, and maintain Power BI reports and dashboards for business users.
- Translate business requirements into effective visualizations using various data sources.
- Create data models, DAX calculations, and custom measures to support business analytics needs.
- Optimize performance and ensure data accuracy in Power BI reports.
- Troubleshoot and resolve issues related to transformations and visualizations.
- Train end-users on using Power BI for self-service analytics.

Skills Required:
- Proficiency in Power BI Desktop and Power BI Service.
- Good understanding of Power BI Copilot.
- Strong understanding of data modelling concepts and the DAX language.
- Strong understanding of semantic data modelling concepts.
- Experience with data visualization best practices.
- Experience in working with streaming data as well as batch data.
- Knowledge of ADF would be an added advantage.
- Knowledge of SAS would be an added advantage.
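The DAX measures mentioned above conceptually filter a fact table through its dimension relationships. A rough pure-Python analogue of what a measure like `CALCULATE(SUM(Sales[Amount]), Region[Name] = "West")` computes, using an invented toy star schema (illustrative only, not the DAX engine):

```python
# Toy star schema: a fact table related to a dimension table by region_id.
dim_region = {1: "West", 2: "East"}
fact_sales = [
    {"region_id": 1, "amount": 120.0},
    {"region_id": 2, "amount": 80.0},
    {"region_id": 1, "amount": 50.0},
]

def total_sales(region_name=None):
    # Analogue of CALCULATE(SUM(Sales[Amount]), Region[Name] = region_name):
    # the dimension filter propagates to fact rows via the relationship.
    rows = (
        fact_sales if region_name is None
        else [r for r in fact_sales if dim_region[r["region_id"]] == region_name]
    )
    return sum(r["amount"] for r in rows)
```

In Power BI the relationship and filter propagation are declared in the semantic model rather than coded, which is why well-designed data models matter as much as the DAX itself.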
Posted 2 weeks ago
8.0 - 10.0 years
15 - 20 Lacs
Hyderabad
Remote
US shifts (night shift). 8+ years in Data Engineering; expert in ADF, SQL, and Power BI (DAX, star schemas, dashboards, semantic models). Strong in data modeling (manufacturing domain), Python, and API development. GE PPA experience a plus.
Posted 2 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Hybrid
We are looking for a passionate Search Specialist Backend Engineer to join our team. This role will focus on improving and optimizing our search capabilities to enhance user experience, scalability, and relevancy.

Responsibilities:
- Design, develop, and maintain the search application, ensuring performance and scalability.
- Collaborate with cross-functional teams to define and implement search features and improvements.
- Ensure search results are relevant by employing techniques like ranking, personalization, and recommendation.
- Work on complex problems related to search algorithms, data structures, and distributed systems.
- Implement logging, metrics, and monitoring for search services.
- Optimize search by tuning the underlying algorithms, experimenting with new techniques, and leveraging tools like Elasticsearch, Solr, etc.
- Maintain and improve existing search functionality while ensuring backward compatibility.
- Stay updated with the latest advancements in search technology and industry best practices.

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Experience with search engines like Elasticsearch, Solr, or similar technologies.
- Solid understanding of algorithms, data structures, and distributed systems.
- Proficiency in Python and Django.
- Familiarity with RESTful APIs and backend services.

Preferred Qualifications:
- Experience with natural language processing (NLP) or machine learning as applied to search.
- Knowledge of search relevance techniques and ranking algorithms.
- Experience in a cloud environment (e.g., AWS, Google Cloud, Azure).
- Familiarity with containerization technologies such as Docker and Kubernetes.
- Strong analytical and debugging skills.

Personal Attributes:
- Strong communication skills and the ability to collaborate effectively in a team setting.
- A keen interest in improving user experience through search.
- Proactive, self-motivated, and able to work in a fast-paced environment.
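Relevance ranking of the kind this role tunes starts from term-weighting schemes like TF-IDF, which Elasticsearch and Solr refine into BM25 and layer with ranking and personalisation. A minimal self-contained sketch of TF-IDF scoring (toy whitespace tokenizer and invented corpus, illustrative only):

```python
import math
from collections import Counter

def tokenize(text: str) -> list:
    return text.lower().split()

def tf_idf_scores(query: str, docs: list) -> list:
    # Classic TF-IDF relevance scoring: score rises with term frequency in a
    # document and falls with how common the term is across the corpus.
    n = len(docs)
    doc_tokens = [Counter(tokenize(d)) for d in docs]
    df = Counter()  # document frequency of each term
    for toks in doc_tokens:
        df.update(set(toks))
    scores = []
    for toks in doc_tokens:
        score = 0.0
        for term in tokenize(query):
            if term in toks:
                idf = math.log(n / df[term]) + 1.0  # +1 so matches always count
                score += toks[term] * idf
        scores.append(score)
    return scores

docs = ["rust systems programming", "python web backend", "python search backend"]
scores = tf_idf_scores("python search", docs)
best = max(range(len(docs)), key=scores.__getitem__)
```

BM25 adds term-frequency saturation and document-length normalisation on top of this idea, which is why it behaves better on long documents.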
Posted 2 weeks ago
5.0 - 7.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Design, develop, and optimize scalable data pipelines using Databricks (PySpark, Scala, SQL).
- Implement ETL/ELT workflows for large-scale data integration across cloud and on-premises environments.
- Leverage Microsoft Fabric (Data Factory, OneLake, Lakehouse, DirectLake, etc.) to build unified data solutions.
- Collaborate with data architects, analysts, and stakeholders to deliver business-critical data models and pipelines.
- Monitor and troubleshoot performance issues in data pipelines.
- Ensure data governance, quality, and security across all data assets.
- Work with Delta Lake, Unity Catalog, and other modern data lakehouse components.
- Automate and orchestrate workflows using Azure Data Factory, Databricks Workflows, or Microsoft Fabric pipelines.
- Participate in code reviews, CI/CD practices, and agile ceremonies.

Required Skills:
- 5-7 years of experience in data engineering, with strong exposure to Databricks.
- Proficient in PySpark, SQL, and performance tuning of Spark jobs.
- Hands-on experience with Microsoft Fabric components.
- Experience with Azure Synapse, Data Factory, and Azure Data Lake.
- Understanding of Lakehouse architecture and modern data mesh principles.
- Familiarity with Power BI integration and semantic modeling (preferred).
- Knowledge of DevOps and CI/CD for data pipelines (e.g., using GitHub Actions, Azure DevOps).
- Excellent problem-solving, communication, and collaboration skills.
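Much of the Delta Lake ETL/ELT work described above centres on MERGE (upsert) semantics: match incoming rows to target rows on a key, update matches, insert the rest. A plain-Python sketch of what a Databricks `MERGE INTO target USING updates ON ...` statement does per key (illustrative of the semantics only, not the Spark API):

```python
def merge_upsert(target: dict, updates: list, key: str) -> dict:
    # Leave the input untouched, like a new table version in Delta Lake.
    merged = dict(target)
    for row in updates:
        # WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT
        merged[row[key]] = row
    return merged

target = {
    "1": {"id": "1", "status": "open"},
    "2": {"id": "2", "status": "open"},
}
updates = [
    {"id": "2", "status": "closed"},  # matched: update
    {"id": "3", "status": "open"},    # not matched: insert
]
result = merge_upsert(target, updates, key="id")
```

Returning a new mapping rather than mutating in place mirrors how Delta Lake commits produce a new table version, which is what makes time travel and safe concurrent reads possible.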
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
About the Role:
Grade Level (for internal use): 11
The Role: The Knowledge Engineering team is seeking a Lead Knowledge Engineer to support our strategic transformation from a traditional data organization into a next-generation interconnected data intelligence organization.
The Team: The Knowledge Engineering team within data strategy and governance helps to lead fundamental organizational and operational change, driving our linked data, open data, and data governance strategy, both internally and externally. The team partners closely with data and software engineering to envision and build the next generation of data architecture and tooling with modern technologies.
The Impact: Knowledge Engineering efforts occur within the broader context of major strategic initiatives to extend market leadership and build next-generation data, insights, and analytics products that are powered by our world-class datasets.
What's in it for you: The Lead Knowledge Engineer role is an opportunity to work as an individual contributor, creatively solving complex challenges alongside visionary leadership and colleagues. It's a role with highly visible initiatives and outsized impact. The wider division has a great culture of innovation, collaboration, and flexibility with a focus on delivery. Every person is respected and encouraged to be their authentic self.
Responsibilities:
- Develop, implement, and continue to enhance ontologies, taxonomies, knowledge graphs, and related semantic artefacts for interconnected data, as well as topical/indexed query, search, and asset discovery.
- Design and prototype data/software engineering solutions that scale the construction, maintenance, and consumption of semantic artefacts and the interconnected data layer for various application contexts.
- Provide thought leadership for strategic projects, ensuring timelines are feasible, work is effectively prioritized, and deliverables are met.
- Influence the strategic semantic vision, roadmap, and next-generation architecture.
- Execute on the interconnected data vision by creating linked metadata schemes to harmonize semantics across systems and domains.
- Analyze and implement knowledge organization strategies using tools capable of metadata management, ontology management, and semantic enrichment.
- Influence and participate in governance bodies to advocate for the use of established semantics and knowledge-based tools.

Qualifications:
- Able to communicate complex technical strategies and concepts in a relatable way to both technical and non-technical stakeholders and executives, to effectively persuade and influence.
- 5+ years of experience with ontology development, semantic web technologies (RDF, RDFS, OWL, SPARQL), and open-source or commercial semantic tools (e.g., VocBench, TopQuadrant, PoolParty, RDFLib, triple stores); advanced studies in computer science, knowledge engineering, information sciences, or a related discipline preferred.
- 3+ years of experience in advanced data integration with semantic and knowledge graph technologies in complex, enterprise-class, multi-system environments; skilled in all phases from conceptualization to optimization.
- Programming skills in a mainstream programming language (Python, Java, JavaScript); experience with cloud services (AWS, Google Cloud, Azure) is a great bonus.
- Understanding of the agile development life cycle and the broader data management discipline (data governance, data quality, metadata management, reference and master data management).

S&P Global Enterprise Data Organization is a unified, cross-divisional team focused on transforming S&P Global's data assets. We streamline processes and enhance collaboration by integrating diverse datasets with advanced technologies, ensuring efficient data governance and management.

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today.

What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People:
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
---- S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - ---- 10 - Officials or Managers (EEO-2 Job Categories-United States of America), DTMGOP103.2 - Middle Management Tier II (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)
Posted 2 weeks ago
14.0 - 19.0 years
25 - 27 Lacs
Hyderabad
Work from Office
Overview As a Visualization Platform Architect, you will be a key techno-functional expert leading and overseeing PepsiCo's data visualization platforms and operations. You will drive a strong vision for how visualization platforms can proactively create a positive impact on the business. You'll be an empowered leader of a team of visualization engineers who build platform products for visualization optimization, cost efficiency, and tools for BI operations on the PepsiCo Data Lake. You will enable analytics, business intelligence, and data exploration efforts across the company. As the leader of the visualization platform team, you will help in managing governance frameworks to ensure best practices in visualization platforms for large and complex data applications in public cloud environments. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Active contributor to cost optimization of visualization platforms and services. Manage and scale Azure-based visualization platforms to support new product launches and ensure platform stability and observability across BI products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for visualization platforms for cost and performance. Implement best practices for system integration, security, performance tuning, and platform management in BI tools. Empower the business by creating value through the increased adoption of data visualization and business intelligence landscapes. Collaborate with internal clients (data science and product teams) to drive solutioning and proof-of-concept (PoC) discussions. Advance the architectural maturity of visualization platforms by engaging with enterprise architects and strategic internal and external partners. 
Define and manage SLAs for visualization platforms and processes running in production. Support large-scale experimentation in data visualization and dashboarding. Prototype new approaches and build scalable visualization solutions. Research and implement state-of-the-art methodologies in data visualization. Document learnings, best practices, and knowledge transfer strategies. Create and audit reusable templates, dashboards, and libraries for BI tools.

Qualifications
- 14+ years of overall technology experience, including at least 4+ years of hands-on experience in visualization platform architecture, program management, and advanced analytics.
- 6+ years of experience with data visualization tools such as Power BI, Tableau, and Looker.
- Strong expertise in visualization platform optimization and performance tuning.
- Experience in managing multiple teams and collaborating with different stakeholders to implement the team's vision.
- Fluent with Azure cloud services; Azure certification is a plus.
- Experience integrating multi-cloud services with on-premises visualization platforms.
- Expertise in data modeling, data warehousing, and building BI semantic models.
- Proficient in DAX queries, Copilot, and AI-powered visualization tools.
- Experience building and managing highly available, distributed BI platforms.
- Hands-on experience with version control systems (GitHub) and deployment & CI/CD tools.
- Knowledge of Azure Data Factory, Azure Synapse, and Azure Databricks.
- Experience with statistical and ML-driven visualization techniques is a plus.
- Experience with visualization solutions in retail or supply chain is advantageous.
- Understanding of metadata management, data lineage, and BI governance frameworks.
- Working knowledge of agile methodologies, including DevOps and DataOps concepts.
- Familiarity with augmented analytics tools (e.g., ThoughtSpot, Tellius) is a plus.
Posted 2 weeks ago
12.0 - 17.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Overview As a Visualization Platform Architect, you will be a key techno-functional expert leading and overseeing PepsiCo's data visualization platforms and operations. You will drive a strong vision for how visualization platforms can proactively create a positive impact on the business. You'll be an empowered leader of a team of visualization engineers who build platform products for visualization optimization, cost efficiency, and tools for BI operations on the PepsiCo Data Lake. You will enable analytics, business intelligence, and data exploration efforts across the company. As the leader of the visualization platform team, you will help in managing governance frameworks to ensure best practices in visualization platforms for large and complex data applications in public cloud environments. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Active contributor to cost optimization of visualization platforms and services. Manage and scale Azure-based visualization platforms to support new product launches and ensure platform stability and observability across BI products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for visualization platforms for cost and performance. Implement best practices for system integration, security, performance tuning, and platform management in BI tools. Empower the business by creating value through the increased adoption of data visualization and business intelligence landscapes. Collaborate with internal clients (data science and product teams) to drive solutioning and proof-of-concept (PoC) discussions. Advance the architectural maturity of visualization platforms by engaging with enterprise architects and strategic internal and external partners. 
Define and manage SLAs for visualization platforms and processes running in production. Support large-scale experimentation in data visualization and dashboarding. Prototype new approaches and build scalable visualization solutions. Research and implement state-of-the-art methodologies in data visualization. Document learnings, best practices, and knowledge transfer strategies. Create and audit reusable templates, dashboards, and libraries for BI tools.

Qualifications
- 12+ years of overall technology experience, including at least 4+ years of hands-on experience in visualization platform architecture, program management, and advanced analytics.
- 6+ years of experience with data visualization tools such as Power BI, Tableau, and Looker.
- Strong expertise in visualization platform optimization and performance tuning.
- Experience in managing multiple teams and collaborating with different stakeholders to implement the team's vision.
- Fluent with Azure cloud services; Azure certification is a plus.
- Experience integrating multi-cloud services with on-premises visualization platforms.
- Expertise in data modeling, data warehousing, and building BI semantic models.
- Proficient in DAX queries, Copilot, and AI-powered visualization tools.
- Experience building and managing highly available, distributed BI platforms.
- Hands-on experience with version control systems (GitHub) and deployment & CI/CD tools.
- Knowledge of Azure Data Factory, Azure Synapse, and Azure Databricks.
- Experience with statistical and ML-driven visualization techniques is a plus.
- Experience with visualization solutions in retail or supply chain is advantageous.
- Understanding of metadata management, data lineage, and BI governance frameworks.
- Working knowledge of agile methodologies, including DevOps and DataOps concepts.
- Familiarity with augmented analytics tools (e.g., ThoughtSpot, Tellius) is a plus.
Posted 2 weeks ago
10.0 - 15.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Position Overview
As an Engineering Manager, you will lead a team of software engineers in building scalable, reliable, and efficient web applications and microservices for the "News and Competitive Data Analysis Platform". You will drive technical excellence, system architecture, and best practices while fostering a high-performance engineering culture. You'll be responsible for managing engineering execution, mentoring engineers, and ensuring the timely delivery of high-quality solutions. Your expertise in Python, Django, React, Apache Solr, RabbitMQ, Postgres, and other NoSQL cloud databases will help shape the technical strategy of our SaaS platform. You will collaborate closely with Product, Design, and DevOps teams to align engineering efforts with business goals.

Key Responsibilities:
- Lead the design and development of internal platforms that empower product teams to build, deploy, and scale services seamlessly.
- Integrate AI/ML capabilities into platform tools, such as semantic search, intelligent alerting, auto-scaling, and workflow automation.
- Optimize distributed backend systems (using Celery, RabbitMQ) for efficiency, reliability, and performance across tasks like crawling, processing, and notifications.
- Collaborate closely with DevOps, SRE, data, and ML teams to build secure, observable, and scalable infrastructure across AWS and GCP.
- Drive cloud modernization, including the strategic migration from GCP to AWS, standardizing on containerization and CI/CD best practices.
- Foster a culture of platform ownership, engineering excellence, and continuous improvement across tooling, monitoring, and reliability.
- Mentor engineers, enabling them to grow technically while influencing platform architecture and cross-team impact.

Required Experience/Skills:
- 8+ years of experience in backend or platform engineering, with 2+ years in a technical leadership role.
- Strong hands-on experience with Python (Django, Celery), React, and building scalable distributed systems.
- Deep knowledge of message brokers (RabbitMQ), PostgreSQL, and search technologies like Apache Solr or Elasticsearch.
- Exposure to AI/ML technologies such as NLP, semantic search, LLMs, or vector databases (e.g., FAISS, Pinecone).
- Experience with CI/CD pipelines, container orchestration (Docker, Kubernetes), and observability tools.
- Proven ability to lead cloud migration initiatives and manage infrastructure across AWS/GCP.
- A platform-first mindset focused on developer productivity, reusability, scalability, and system performance.
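The Celery/RabbitMQ task processing mentioned above typically relies on a retry-with-backoff policy for transient failures. A minimal sketch of that control flow in plain Python (Celery configures this declaratively via task options such as retry limits and backoff; this shows only the underlying idea, and the flaky task is an invented example):

```python
import time

def run_with_retries(task, max_retries: int = 3, base_delay: float = 0.0):
    # Re-run the task on failure, doubling the delay each attempt; re-raise
    # once the retry budget is exhausted (a real worker would dead-letter it).
    for attempt in range(max_retries + 1):
        try:
            return task()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

calls = {"n": 0}

def flaky():
    # Simulated transient failure: succeeds on the third attempt.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky)
```

Backoff with jitter is the usual production refinement, so that many failing workers do not retry in lockstep against a recovering broker or downstream service.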
Posted 2 weeks ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
: Job Title- Data Science Engineer, AVP Location- Bangalore, India Role Description We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector DBs, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. What we'll offer you 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Accident and term life insurance Your key responsibilities Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems. Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications. NER Models: Train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP). Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning. Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements. Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring. Your skills and experience 8+ years of professional experience in AI/ML development, with a focus on agentic AI systems. 
Proficient in Python, Python API frameworks, and SQL, and familiar with AI/ML frameworks such as TensorFlow or PyTorch. Experience in deploying AI models on cloud platforms (e.g., GCP, AWS). Experience with LLMs (e.g., GPT-4), SLMs (spaCy), and prompt engineering. Understanding of semantic technologies, ontologies, and RDF/SPARQL. Familiarity with MLOps tools and practices for continuous integration and deployment. Skilled in building and querying knowledge graphs using tools like Neo4j. Hands-on experience with vector databases and embedding techniques. Familiarity with RAG architectures and hybrid search methodologies. Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce. Strong problem-solving abilities and analytical thinking. Excellent communication skills for cross-functional collaboration. Ability to work independently and manage multiple projects simultaneously. How we'll support you About us and our teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
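The retrieval step in a RAG pipeline like the one this posting describes can be sketched with plain cosine similarity over toy embedding vectors; in a real system the vectors would come from an embedding model and live in a vector store such as FAISS or Milvus. The documents, ids, and 3-dimensional vectors below are invented purely for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: id -> embedding. Real embeddings have hundreds of
# dimensions and are produced by a model, not written by hand.
corpus = {
    "doc_rates": [0.9, 0.1, 0.0],
    "doc_fraud": [0.1, 0.8, 0.2],
    "doc_kyc":   [0.0, 0.3, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# The retrieved documents would then be injected into the LLM prompt
# as grounding context before generation.
print(retrieve([0.85, 0.2, 0.05]))  # → ['doc_rates', 'doc_fraud']
```

Hybrid search, also mentioned in the posting, layers a keyword score (e.g. BM25) on top of this vector score and fuses the two rankings.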
Posted 3 weeks ago
1.0 - 3.0 years
1 - 1 Lacs
Tiruchirapalli
Work from Office
Role: EPUB Accessibility Type: Full-time Experience: 1+ years in EPUB accessibility projects Skills: EPUB 3.0/WCAG 2.1 Assistive technology understanding Semantic tagging DAISY / Ace by DAISY / SMART validation tools Image description and alt-text writing Provident fund
Posted 3 weeks ago
15.0 - 20.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing. Must have skills : Data & AI Strategy Good to have skills : NA Minimum 18 year(s) of experience is required Educational Qualification : 15 years full time education Summary : As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence tools and cloud AI services. Your typical day will involve collaborating with cross-functional teams to design and implement production-ready solutions, ensuring that the applications meet high-quality standards. You will also explore the integration of generative AI models into various projects, contributing to innovative solutions that enhance user experiences and operational efficiencies. Your role will require a blend of technical expertise and creative problem-solving to address complex challenges in the AI landscape. Lead AI delivery and be responsible for successful value delivery. Roles & Responsibilities:- AI/GenAI architecture patterns, Agentic Delivery Lifecycle, Method 1 - AI/Agentic AI, Accenture's solutioning process, Agile delivery- AI & GenAI architecture competency (RAG, agent orchestration, model serving patterns)- Experience in data management and data architecture- Solid engineering background; should have executed Data & AI programs. 
Professional & Technical Skills: - Must-Have Skills: Proficiency in Data & AI Strategy- Strong understanding of Enterprise Data Strategy and aligning data initiatives with digital transformation goals- Hands-on expertise in Data Mesh or Data Fabric architecture and managing Data-as-a-Product across decentralized domains- Familiarity with Generative AI and Agentic Systems, and preparing data pipelines that support AI/ML use cases (e.g., RAG, fine-tuning, prompt engineering)- Strong knowledge of Semantic Layers- Understanding of Data Privacy and Confidentiality- Demonstrated experience in designing Enterprise Data Architectures, including lakehouse, warehouse, and federated models- Capability to define, build, and manage scalable Data Products aligned with business value and user needs- Experience in orchestrating Data Delivery & Consumption models, enabling self-service analytics and reusable data assets across business units- Experience leading large-scale software programs with delivery accountability- Agile / Scrum for iterative value delivery and release planning- Strong stakeholder & client management- Experience running multidisciplinary teams (engineering, DevOps, QA) across geographies- Budget & KPI ownership: tracking value realization, burn-up charts, and delivery metrics- Experience working with industry-specific data models (e.g., in Life Sciences, Financial Services, or Manufacturing)- Experience in design and implementation of Enterprise Knowledge Graphs- Understanding of Zero Trust architectures and confidential computing techniques for secure data access- Familiarity with solutioning / proposal processes (estimates, SOWs, win-themes)- Expertise in scaling programs on Azure, AWS, or GCP and managing GPU/PTU capacity- Experience working on NLP/ML/AI programs Additional Information:- The candidate should have a minimum of 18 years of experience in Data & AI Strategy.- This position is based at our Bengaluru office.- A 15 years full time education is required. 
Qualification 15 years full time education
Posted 3 weeks ago
0.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Primary Skills: Semantic Kernel and Azure AI Agent. Experience: minimum 9 months of real project experience. Semantic Kernel experience focuses on building intelligent agents and applications by leveraging the Semantic Kernel SDK, an open-source framework from Microsoft that allows for the integration of AI models and services. Secondary Skills: This role involves designing and developing AI solutions, including copilots, and integrating them with enterprise data and systems using tools like Azure OpenAI and Azure AI Search.
Posted 1 month ago
2.0 - 7.0 years
2 - 4 Lacs
Bengaluru
Work from Office
Role & responsibilities Preferred candidate profile Image, video, and text annotators contribute to the development of AI and machine learning systems by providing accurately labeled datasets, enabling the training and evaluation of algorithms for various applications such as object detection, image classification, semantic segmentation, and more. Should have two years of experience in labeling or annotating various elements within images/videos or visual data. Should have annotated temporal information within videos, such as tracking the movement or trajectory of objects, identifying key frames, or annotating specific actions or events occurring over time. Should have annotated images by marking and labeling specific objects, regions, or features within the images. This may involve drawing bounding boxes around objects, highlighting points of interest, or segmenting objects using polygons or masks. Ensure the quality and accuracy of annotated data by reviewing and verifying annotations for correctness and consistency. Follow and adhere to specific annotation guidelines, instructions, or specifications provided by the project or company. This includes understanding and applying domain-specific knowledge, as well as adapting to evolving requirements. Collaborate with team members, project managers, or stakeholders to address any questions, concerns, or issues related to the annotation process. Effective communication is essential for ensuring consistent and accurate annotations. Experience working on complex annotation tasks including 3D lidar labeling. Excellent written and verbal communication skills to convey technical challenges and updates. Report any data quality issues or tool-related challenges to the project leads in a timely manner. Client: Oracle. Location: Bangalore.
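The bounding-box review work described above is often automated with an intersection-over-union (IoU) check between an annotator's box and a verified reference box. The coordinates below are made up for illustration; the threshold for flagging a box would come from the project's annotation guidelines.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the overlap rectangle (empty if the boxes miss).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# Reviewer check: flag annotations that drift too far from the
# verified reference box.
reference = (10, 10, 50, 50)
annotated = (12, 8, 48, 52)
print(round(iou(reference, annotated), 3))  # → 0.826
```

The same ratio generalizes to polygon and mask annotations by swapping rectangle areas for pixel counts.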
Posted 1 month ago
5.0 - 10.0 years
6 - 11 Lacs
Hyderabad
Work from Office
8-10 years of IT experience with several years of hands-on application development using Microsoft technologies (.Net) Experience in developing applications using modern architectures such as API/microservices and migrating monoliths to more modern architectures Experience in developing solutions using public cloud platforms like Azure Experience in developing applications using the latest .Net frameworks like .Net 4.8.1; should be hands-on with .Net 7.0 and 8.0, .Net Core 1.0, 2.0 and .Net Standard Experience in developing REST APIs using .Net WebAPI and the latest .Net Core version Experience in Azure Functions and Azure services like Function Apps, WebJobs, Logic Apps, etc. Knowledge of Semantic Kernel is a plus Experience in CosmosDB Knowledge of developing applications using modern front-end technologies like AngularJS, ReactJS, HTML5, CSS3, etc. Familiarity with cloud-native application architecture patterns including containers, functional computing, and batch processing Experience in conducting code reviews and defining best practices for the team to produce good code Excellent oral and written communication skills Experience in leading teams, and excellent stakeholder management skills Strong working knowledge of Agile Oversight experience on transformation projects and successful transitions to implementation support teams Presentation skills with a high degree of comfort with both large and small audiences Expertise in modernizing traditional N-Tier applications to Azure and modernizing application code in Azure Knowledge of Azure DevOps (VSTS) Continuous Integration / Continuous Delivery (CI/CD), implementing optimized development processes is a plus Experience in code reviews and defining best practices for the team to produce good code
Posted 1 month ago
7.0 - 12.0 years
6 - 10 Lacs
Hyderabad
Work from Office
8-10 years of IT experience with several years of hands-on application development using Microsoft technologies (.Net) Experience in developing applications using modern architectures such as API/microservices and migrating monoliths to more modern architectures Experience in developing solutions using public cloud platforms like Azure Experience in developing applications using the latest .Net frameworks like .Net 4.8.1; should be hands-on with .Net 7.0 and 8.0, .Net Core 1.0, 2.0 and .Net Standard Experience in developing REST APIs using .Net WebAPI and the latest .Net Core version Experience in Azure Functions and Azure services like Function Apps, WebJobs, Logic Apps, etc. Knowledge of Semantic Kernel is a plus Experience in CosmosDB Knowledge of developing applications using modern front-end technologies like AngularJS, ReactJS, HTML5, CSS3, etc. Experience in conducting code reviews and defining best practices for the team to produce good code Excellent oral and written communication skills Experience in leading teams, and excellent stakeholder management skills Strong working knowledge of Agile Oversight experience on transformation projects and successful transitions to implementation support teams Presentation skills with a high degree of comfort with both large and small audiences Expertise in modernizing traditional N-Tier applications to Azure and modernizing application code in Azure Knowledge of Azure DevOps (VSTS) Continuous Integration / Continuous Delivery (CI/CD), implementing optimized development processes is a plus
Posted 1 month ago
6.0 - 11.0 years
8 - 18 Lacs
Hyderabad
Work from Office
Type: Contract Description Extensive experience in Power BI Visualization, Data Modeling, and Semantic Layer implementation Experience in Tabular model implementation Experience in client interactions and requirement analysis Excellent communication and comprehension skills Experience in requirement documentation Excellent presentation and coordination skills
Posted 1 month ago
5.0 - 9.0 years
13 - 17 Lacs
Mumbai
Work from Office
Requirements. Bachelor's or Master's in Computer Science or a related field focused on language processing. 10+ years of experience in NLP with strong Python skills. Expertise in machine learning frameworks and LLMs like BERT and GPT. Deep understanding of NLP techniques for text representation, semantic extraction, data structures, and modeling. Experience in the end-to-end product lifecycle: design, development, QA, deployment, and maintenance. Responsibilities. Design and develop NLP-based AI systems focusing on user intent, context, and output. Collaborate with product, design, and development teams for optimal UX. Conduct user research and refine models based on feedback. Stay updated on AI, NLP, and conversational tech trends. Maintain engineering documentation and best practices. Support QA and DevOps processes to enhance model performance. Job Details. Location: Mumbai, BKC. Mode: 5 days a week, in-office. Interview process. HR Screening. Technical Round. Technical Round. Final Round with Founder.
Posted 1 month ago
8.0 - 10.0 years
6 - 10 Lacs
Pune
Work from Office
Role Title: Senior Frontend Engineer Reports To: Engineering Manager / Frontend Architect Location: Pune - Kharadi Experience Required: 8+ Years Role Summary As a Senior Frontend Engineer, you will lead the development of high-performance, scalable, and accessible web components using modern JavaScript, TypeScript, and web standards. You'll architect and maintain design systems, implement robust testing strategies, and ensure a seamless user experience through reusable UI components and frameworks. Your work will be central to building reliable, maintainable, and user-friendly interfaces. Key Responsibilities Design, develop, and maintain reusable web components using modern web standards. Architect and evolve scalable design systems using CSS custom properties, theming, and component APIs. Implement and maintain CI-driven testing using Jest, Storybook, and visual regression tools. Optimize frontend builds through bundling, tree shaking, semantic versioning, and monorepo strategies. Ensure accessibility compliance (WCAG, a11y) and integrate accessibility testing into the workflow. Collaborate with cross-functional teams (designers, backend engineers, QA) to deliver seamless features. Monitor application performance and implement improvements proactively. Required Qualifications & Skills Proficiency in TypeScript, JavaScript, and modern ES modules. Expertise in web component standards, lifecycle methods, reactive properties, and component APIs. Strong grasp of Vite, npm, monorepo architecture, and semantic versioning. Deep understanding of CSS custom properties, slots, and theming strategies. Hands-on experience in unit testing, component testing, and interaction and visual regression testing. Familiarity with Storybook, Jest, testing automation, and accessibility audits. Soft Skills Strong problem-solving and architectural thinking. Attention to detail and a commitment to code quality. Excellent communication and documentation skills. 
Collaborative mindset with experience working in agile teams. Initiative-driven and self-organized in a fast-paced environment. Preferred Qualifications Experience building and scaling design systems. Contributions to open-source frontend tools or libraries. Knowledge of micro-frontend or modular architecture. Familiarity with CI/CD pipelines and DevOps practices. Key Relationships Internal: Product Managers, UX/UI Designers, Backend Developers, QA/Test Automation Engineers External: Design System Contributors, Accessibility Auditors, Vendors/Third-party Tool Providers Role Dimensions Individual contributor with potential to lead components or junior developers. Influencer in technical direction and design system evolution. Contributor to frontend quality standards and testing best practices. Success Measures (KPIs) Timely and high-quality delivery of reusable components. Coverage and performance of automated testing suites. Accessibility compliance across all UI features. Reduction in UI defects and production incidents. Contribution to documentation and internal knowledge sharing. Competency Framework Alignment (Competency Area: Expected Proficiency) Technical Expertise: Deep understanding of frontend and testing tools. Code Quality & Testing: Drives a high-coverage and automation culture. Communication: Clearly articulates decisions and issues. Problem Solving: Resolves complex UI and architecture challenges. Collaboration: Works well in cross-functional teams.
Posted 1 month ago
4.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Description Summary The Data Scientist will work in teams addressing statistical, machine learning and data understanding problems in a commercial technology and consultancy development environment. In this role, you will contribute to the development and deployment of modern machine learning, operational research, semantic analysis, and statistical methods for finding structure in large data sets. Job Description Site Overview Established in 2000, the John F. Welch Technology Center (JFWTC) in Bengaluru is GE Aerospace's multidisciplinary research and engineering center. Pushing the boundaries of innovation every day, engineers and scientists at JFWTC have contributed to hundreds of aviation patents, pioneering breakthroughs in engine technologies, advanced materials, and additive manufacturing. Role Overview: As a Data Scientist, you will be part of a data science or cross-disciplinary team on commercially facing development projects, typically involving large, complex data sets. These teams typically include statisticians, computer scientists, software developers, engineers, product managers, and end users, working in concert with partners in GE business units. Potential application areas include remote monitoring and diagnostics across infrastructure and industrial sectors, financial portfolio risk assessment, and operations optimization. In this role, you will: Develop analytics within well-defined projects to address customer needs and opportunities. Work alongside software developers and software engineers to translate algorithms into commercially viable products and services. Work in technical teams in development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics. Perform exploratory and targeted data analyses using descriptive statistics and other methods. 
Work with data engineers on data quality assessment, data cleansing and data analytics Generate reports, annotated code, and other projects artifacts to document, archive, and communicate your work and outcomes. Share and discuss findings with team members. Required Qualifications: Bachelor's Degree in Computer Science or STEM Majors (Science, Technology, Engineering and Math) with basic experience. Desired Characteristics: - Expertise in one or more programming languages and analytic software tools (e.g., Python, R, SAS, SPSS). Strong understanding of machine learning algorithms, statistical methods, and data processing techniques. - Exceptional ability to analyze large, complex data sets and derive actionable insights. Proficiency in applying descriptive, predictive, and prescriptive analytics to solve real-world problems. - Demonstrated skill in data cleansing, data quality assessment, and data transformation. Experience working with big data technologies and tools (e.g., Hadoop, Spark, SQL). - Excellent communication skills, both written and verbal. Ability to convey complex technical concepts to non-technical stakeholders and collaborate effectively with cross-functional teams - Demonstrated commitment to continuous learning and staying up-to-date with the latest advancements in data science, machine learning, and related fields. Active participation in the data science community through conferences, publications, or contributions to open-source projects. - Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements. Flexibility to work on diverse projects across various domains. Preferred Qualifications: - Awareness of feature extraction and real-time analytics methods. - Understanding of analytic prototyping, scaling, and solutions integration. - Ability to work with large, complex data sets and derive meaningful insights. 
- Familiarity with machine learning techniques and their application in solving real-world problems. - Strong problem-solving skills and the ability to work independently and collaboratively in a team environment. - Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders. Domain Knowledge: Demonstrated awareness of industry and technology trends in data science Demonstrated awareness of customer and stakeholder management and business metrics Leadership: Demonstrated awareness of how to function in a team setting Demonstrated awareness of critical thinking and problem solving methods Demonstrated awareness of presentation skills Personal Attributes: Demonstrated awareness of how to leverage curiosity and creativity to drive business impact Humble: respectful, receptive, agile, eager to learn Transparent: shares critical information, speaks with candor, contributes constructively Focused: quick learner, strategically prioritizes work, committed Leadership ability: strong communicator, decision-maker, collaborative Problem solver: analytical-minded, challenges existing processes, critical thinker Whether we are manufacturing components for our engines, driving innovation in fuel and noise reduction, or unlocking new opportunities to grow and deliver more productivity, our GE Aerospace teams are dedicated and making a global impact. Join us and help move the aerospace industry forward . Additional Information Relocation Assistance Provided: No
Posted 1 month ago
3.0 - 5.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Job Description Summary The Data Science Specialist will develop and implement Artificial Intelligence based solutions across various disciplines in GE Aerospace under the guidance of senior team members. In this role, the candidate will contribute to the development and deployment of modern machine learning, artificial intelligence, statistical methods, operations research, semantic analysis, etc., for finding structure in large data sets. Job Description Site Overview Established in 2000, the John F. Welch Technology Center (JFWTC) in Bengaluru is our multidisciplinary research and engineering center. Engineers and scientists at JFWTC have contributed to hundreds of aviation patents, pioneering breakthroughs in engine technologies, advanced materials, and additive manufacturing. Role Overview The Data Scientist will be responsible for working on data science projects under the supervision of senior team members and delivering business outcomes. Key responsibilities include: Development of data science models. Work alongside software developers and software engineers to translate algorithms into viable products and services. Work in technical teams in development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics. Perform exploratory and targeted data analysis using descriptive statistics and other methods. Work with data engineers on data quality assessment, data cleansing and data analytics. Generate reports, annotated code, and other project artifacts to document, archive, and communicate the work and outcomes. Share and discuss findings with team members. The Ideal Candidate The ideal candidate should have experience in Gen AI/LLMs and Python. Required Qualifications: Bachelor's Degree in Statistics, Machine Learning, Computer Science or a related field Proficiency in Python (mandatory). 
Demonstrated skill at data cleansing, data quality assessment, and using analytics for solving business problems Demonstrated skill in the use of applied analytics, descriptive statistics, feature extraction and predictive analytics on datasets Demonstrated skill at data visualization and storytelling for an audience of stakeholders Strong communication and interpersonal skills Preferred Qualifications: Influences within peer group. Implements specific component(s) of the roadmap. Evaluates features using well-known or prescribed recipes and appropriately down-selects to valuable ones. Aware of models such as CART, SVM, RF, and Neural Nets and the associated sub-models Knowledge of Gen AI/LLMs Uses Cross Validation and other Verification & Validation techniques to build robust models from large data sets. Understands the types of issues that impact data quality Performs basic data cleaning operations such as flagging missing and invalid data, etc. Fits normal parameters to data and assesses goodness of fit Can use and interpret t-tests, ANOVA and basic hypothesis testing with good utilization of p-values. Codes using modular, object-oriented practices for reuse Effectively visualizes data exploration using box, bubble and matrix plots. Has a basic understanding of the GE Aerospace business and how the tools they are developing create value. At GE Aerospace, we have a relentless dedication to the future of safe and more sustainable flight and believe in our talented people to make it happen. Here, you will have the opportunity to work on really cool things with really smart and collaborative people. Together, we will mobilize a new era of growth in aerospace and defense. Where others stop, we accelerate. #LI-VR1 Additional Information Relocation Assistance Provided: Yes
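The hypothesis-testing skills listed above (t-tests, p-values) can be illustrated with a two-sample Welch t-statistic computed from scratch with the standard library. The two "sensor reading" samples are invented; in practice one would use scipy.stats and read the p-value off the t-distribution rather than stop at the statistic.

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t-statistic (unequal variances assumed)."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    # statistics.variance is the sample variance (n - 1 denominator).
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = (va / len(sample_a) + vb / len(sample_b)) ** 0.5
    return (ma - mb) / se

# Toy readings before and after a hypothetical maintenance event.
before = [10.1, 9.8, 10.3, 10.0, 9.9]
after = [10.9, 11.2, 10.8, 11.0, 11.1]

t = welch_t(after, before)
print(round(t, 2))  # → 8.8, far above typical critical values:
                    # strong evidence the means differ
```

A large |t| relative to the critical value for the relevant degrees of freedom corresponds to a small p-value, which is the quantity the posting expects candidates to interpret.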
Posted 1 month ago
8.0 - 13.0 years
8 - 13 Lacs
Pune, Maharashtra, India
On-site
The successful candidate will work closely with ZS practice leadership and be responsible for evolving our practice, enriching our practice assets and collaterals, building and managing client relationships, generating new business engagements, and providing thought leadership in the Technology and Architecture Area. What You'll Do Design robust and scalable solutions consistent with ZS and industry practices; take advantage of existing assets and maintain a balance between architecture requirements and specific client needs. Drive technical architecture and design discussions with internal and client groups to brainstorm and finalize technology solutions. Collaborate with the Architecture & Engineering expertise center leadership to define the technology roadmap and work with the delivery team to put together a plan for technical implementation and stay on track. Stay current on latest technology trends and architecture patterns, and lead the effort to develop ZS POV for strategic decision-making. Engage with clients to understand their needs and provide tailored solutions. Advance ZS technology offerings by innovating and scaling tech assets, driving feasibility analysis to select technologies/platforms that provide the best solution. Define and establish a technical strategy, standards, and guidelines in the data architecture domain. Groom junior team members and maintain a culture of rapid learning and explorations to drive innovations/POCs on emerging technologies and architecture patterns. Participate and support business development activities. What You'll Bring Bachelor's degree with specialization in Computer Science, IT, or other computer-related disciplines. 8+ years of relevant experience in designing semantic architecture at an enterprise scale. Strong engineering mindset to build highly available and robust architecture frameworks, technology solutions, and reusable assets. 
Expertise in one or more initiatives like cloud strategy, IT transformation, and application portfolio assessment. Excellent communication and client engagement skills and ability to work in a fast-paced and dynamic environment. Experience in providing architecture alternatives, product evaluation and recommendations, POVs for implementation/adoption. Experience in scaling technology solutions aimed at solving complex business problems. Knowledge of all phases of solution development for large-scale solutions and experience working in agile teams with short release cycles. Strong technical team leadership, mentorship, and collaboration abilities.
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
Role & responsibilities Expertise in Graph Databases: A deep understanding of graph databases (architecture, structures, and operations) and query languages (such as SPARQL and Gremlin). Experience in AWS Neptune is preferred. Knowledge of Data Pipelines: Proficiency in designing and managing data pipelines is crucial for ensuring the efficient flow and transformation of data into the knowledge graph. High level of proficiency in Python programming and AWS services including EKS, K8s, S3, and Lambda Secondary Skills: CI/CD, Kubernetes, Docker This is compulsory - Expertise in graph databases and Python programming
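The graph-query skills above can be sketched without a graph engine: a tiny adjacency-list "knowledge graph" with a breadth-first path query, standing in for what SPARQL or Gremlin would express declaratively against a store like AWS Neptune. The entities and edges below are invented for illustration.

```python
from collections import deque

# Toy knowledge graph: node -> list of (relation, target) edges.
graph = {
    "AcmeCorp": [("owns", "AcmeLabs"), ("based_in", "Pune")],
    "AcmeLabs": [("develops", "GraphTool")],
    "GraphTool": [("written_in", "Python")],
}

def find_path(start, goal):
    """Breadth-first search returning the first relation path found,
    as a list of (subject, relation, object) triples, or None."""
    seen = {start}
    frontier = deque([(start, [])])
    while frontier:
        node, path = frontier.popleft()
        if node == goal:
            return path
        for relation, target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                frontier.append((target, path + [(node, relation, target)]))
    return None

# Roughly what a Gremlin traversal would express as
# g.V('AcmeCorp').repeat(out()).until(hasId('Python')).path()
print(find_path("AcmeCorp", "Python"))
```

A real pipeline would stream extracted triples into Neptune and run this kind of query in SPARQL or Gremlin, but the traversal logic is the same.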
Posted 1 month ago