12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate, We are hiring a Backend Developer to build and maintain scalable server-side logic. The role is ideal for engineers who enjoy working with data, performance, and APIs. Key Responsibilities: Design and implement backend services and APIs. Manage databases and application logic. Ensure performance, security, and scalability. Collaborate with frontend and DevOps teams. Required Skills & Qualifications: Proficiency in backend frameworks (Node.js, Django, Spring Boot, etc.). Strong database experience (SQL and NoSQL). Familiarity with REST, GraphQL, and microservices. Bonus: knowledge of CI/CD and containerization. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you. Srinivasa Reddy Kandi, Delivery Manager, Integra Technologies
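For a concrete sense of the server-side work this role describes, here is a minimal sketch of a REST endpoint backed by a database, using Flask and SQLite as stand-ins (the posting equally allows Node.js or Spring Boot); the route, table, and file names are illustrative, not part of the listing:

```python
# Minimal hedged sketch: one REST endpoint persisting to a local database.
from flask import Flask, jsonify, request
import sqlite3

app = Flask(__name__)
DB = "app.db"  # hypothetical local database file

def get_conn():
    conn = sqlite3.connect(DB)
    conn.row_factory = sqlite3.Row
    return conn

@app.route("/api/candidates", methods=["POST"])
def create_candidate():
    data = request.get_json(force=True)
    with get_conn() as conn:  # the context manager commits on success
        # Create the table on first use; a real service would use migrations.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS candidates (id INTEGER PRIMARY KEY, name TEXT)"
        )
        cur = conn.execute("INSERT INTO candidates (name) VALUES (?)", (data["name"],))
        return jsonify({"id": cur.lastrowid, "name": data["name"]}), 201

if __name__ == "__main__":
    app.run(debug=True)
```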
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a full stack core software engineer with a deep understanding of Java/Python and their ecosystems, and strong hands-on experience in building high-performing, scalable, enterprise-grade applications. You will be part of a talented software team that works on mission-critical applications. As a full stack core software engineer, your responsibilities include understanding user requirements and working with a development team on the design, implementation, and delivery of Java/Python applications, while providing expertise across the full software development lifecycle, from concept and design through testing. You will work closely with the business architecture group to design and implement current- and target-state business processes using various tools and technologies. The candidate should ideally have knowledge of several of these technologies: the Java/Python/Unix technology stack, Angular, and JavaScript; SQL/NoSQL and graph databases are used for data storage (we tailor the tools to the needs), integrated with other bank systems via RESTful APIs/web services and Kafka Streams. Qualifications: 8 to 12 years of industry experience, with strong hands-on development of mission-critical applications using Java/Python technologies, aligning each project with the firm's strategic objectives and overseeing team operations to ensure project success. Experience with complex system integration projects. Java, Spring, Spring Boot, Spring Cloud, J2EE design patterns, REST services. Front-end technologies such as JavaScript, Angular, CSS2/CSS3, and HTML. Strong knowledge of SQL, JDBC, and Unix commands. Hands-on database experience in relational (Oracle/DB2) and NoSQL (MongoDB) stores. Hands-on experience working on and deploying applications in the cloud. Hands-on experience with code testing tools such as JUnit, Mockito, and Cucumber. Deployment acquaintance with Apache Tomcat, OpenShift, or other cloud environments. Expertise in test-driven development (JUnit, JMeter), continuous integration (Jenkins), build tools (Maven), version control (Git), and development tools (Eclipse, IntelliJ). Excellent communication skills (written and verbal) and the ability to work in a team environment. Excellent analytical and problem-solving skills and the ability to work well independently. Experience working with business analysts, database administrators, project managers, and technical architects in multiple geographical areas. Experience in the financial services industry is an added advantage; an understanding of financial and reporting hierarchies will be beneficial. Required Skills: Minimum 8 to 12 years of application development experience in Java with: Spring Boot and microservices; REST web services; JPA with Hibernate; Core Java. Minimum 3+ years of hands-on experience designing architecture for enterprise applications. Angular and JavaScript. Experience working on a native cloud platform. Experience with development IDEs such as Eclipse and IntelliJ. Experience with SQL/NoSQL stores such as Oracle, PostgreSQL, Neo4j, and MongoDB. Experience with caching frameworks such as Redis. Experience with CI/CD systems such as Helm and Harness. Experience with messaging services such as Kafka. Experience in Python and Unix shell scripting is an added plus. Excellent troubleshooting skills. Strong problem-solving skills, business acumen, and demonstrated excellent oral and written communication skills with both technical and non-technical audiences.
Experience with the Agile software development lifecycle methodology and related tooling, for example JIRA and Scrum. Education: Bachelor’s or equivalent degree in Computer Science ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
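As a concrete illustration of the stack this role describes, REST services integrated with other systems via Kafka Streams, here is a hedged Python sketch that consumes events from a Kafka topic and folds them into an in-memory projection. It uses the kafka-python client; the topic, broker, and field names are hypothetical stand-ins, not the firm's actual systems:

```python
# Hedged sketch: consume JSON events from Kafka and maintain a simple rollup.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "trade-events",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="reporting-service",
)

positions = {}  # in-memory projection keyed by account id

for message in consumer:
    event = message.value
    # Fold each event into current state, e.g. for a reporting-hierarchy rollup.
    acct = event.get("account_id")
    positions[acct] = positions.get(acct, 0) + event.get("quantity", 0)
    print(f"account={acct} net_position={positions[acct]}")
```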
Posted 2 weeks ago
6.0 - 10.0 years
10 - 14 Lacs
Hyderabad
Work from Office
What you will do Let’s do this. Let’s change the world. In this vital role you are responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and building visualizations to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing. Lead and be hands-on for the technical design, development, testing, implementation, and support of data pipelines that load the data domains in the Enterprise Data Fabric and associated data services. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Be able to translate data models (ontology, relational) into physical designs that are performant, maintainable, and easy to use. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate and communicate effectively with product teams. Identify and resolve complex data-related challenges. Adhere to best practices for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help improve ETL platform performance. Participate in sprint planning meetings and provide estimates for technical work. Collaborate with RunOps engineers to continuously increase our ability to push changes into production with as little manual overhead and as much speed as possible. What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek will have these qualifications. Basic Qualifications: Master’s degree and 4 to 6 years of Computer Science, IT, or related field experience OR Bachelor’s degree and 6 to 8 years of Computer Science, IT, or related field experience OR Diploma and 10 to 12 years of Computer Science, IT, or related field experience Preferred Qualifications: Functional Skills: Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing. Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and graph data stores (e.g., MarkLogic, AllegroGraph, Stardog, RDF triplestores). Experience with ETL tools such as Apache Spark and Prophecy, and with various Python packages for data processing and machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Able to take user requirements and develop data models for data analytics use cases.
Good-to-Have Skills: Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms. Experience using graph databases such as Stardog, MarkLogic, Neo4j, and AllegroGraph, and writing SPARQL queries. Experience working with agile development methodologies such as Scaled Agile. Professional Certifications: AWS Certified Data Engineer preferred; Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments). Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements. Equal opportunity statement: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
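The must-have skills above center on PySpark ETL. A minimal, hedged sketch of such a pipeline step, reading raw files, transforming them, and writing an analytics-ready table, might look like the following; the paths and column names are hypothetical:

```python
# Hedged PySpark sketch: read raw CSVs, clean and aggregate, write Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("domain-load").getOrCreate()

# Hypothetical raw-zone path and schema-on-read CSV input.
raw = spark.read.option("header", True).csv("s3://raw-zone/assays/")

curated = (
    raw.withColumn("assay_date", F.to_date("assay_date", "yyyy-MM-dd"))
       .filter(F.col("result").isNotNull())          # basic data-quality gate
       .groupBy("compound_id", "assay_date")
       .agg(F.avg("result").alias("mean_result"))
)

# Write partitioned Parquet for downstream analysts; overwrite per run.
curated.write.mode("overwrite").partitionBy("assay_date").parquet(
    "s3://curated-zone/assays/"
)
```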
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us. Position Summary The Software Engineer II, Aera DI role is accountable for developing data solutions and operations support of the Enterprise data lake. The role will be accountable for developing the pipelines for the data enablement projects, production/application support and enhancements, and supporting data operations activities. Additional responsibilities include data analysis, data operations processes and tools, data cataloguing, and developing data SME skills in the Global Product Development and Supply - Data and Analytics Enablement organization. Key Responsibilities The Software Engineer will be responsible for designing, building, and maintaining the data products, evolving the data products, and utilizing the most suitable data architecture required for our organization's data needs to support GPS. Responsible for delivering high-quality data products and analytic-ready data solutions. Develop and maintain data models to support our reporting and analysis needs. Develop ad-hoc analytic solutions from solution design to testing, deployment, and full lifecycle management. Optimize data storage and retrieval to ensure efficient performance and scalability. Collaborate with data architects, data analysts and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements. Ensure data quality and integrity through data validation and testing. Implement and maintain security protocols to protect sensitive data. Stay up-to-date with emerging trends and technologies in data engineering and analytics. Participate in the analysis, design, build, manage, and operate lifecycle of the enterprise data lake and analytics-focused digital capabilities. Develop cloud-based (AWS) data pipelines to facilitate data processing and analysis. Build end-to-end ETL pipelines spanning data ingestion -> data processing -> data integration -> visualization. Proficiency in Python/Node.js along with UI technologies like React.js; Spark, SQL, AWS Redshift, AWS S3, Glue/Glue Studio, Athena, IAM, and other native AWS services; familiarity with Domino and data lake principles. Good to have: knowledge of Neo4j, IAM, CFT, and other native AWS services, plus familiarity with data lake principles.
Familiarity and experience with cloud infrastructure management, working closely with the Cloud engineering team. Participate in effort and cost estimations when required. Partner with other data, platform, and cloud teams to identify opportunities for continuous improvement. Architect and develop data solutions according to legal and company guidelines. Assess system performance and recommend improvements. Responsible for maintaining data acquisition/operational-focused capabilities, including the Data Catalog, User Access Request/Tracking, and Data Use Request. If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
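One concrete slice of the AWS pipeline work described above is orchestrating a Glue ETL job. The hedged boto3 sketch below starts a job run and polls its state; the job name and region are hypothetical, not BMS systems:

```python
# Hedged sketch: trigger an AWS Glue ETL job and wait for a terminal state.
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")

run = glue.start_job_run(JobName="curation-etl")  # hypothetical Glue job name
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="curation-etl", RunId=run_id)["JobRun"]["JobRunState"]
    print("Glue job state:", state)
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # poll every 30 seconds
```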
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us. Position Summary The GPS Data & Analytics Software Engineer role is accountable for developing data solutions and operations support of the Enterprise data lake. The role will be accountable for developing the pipelines for the data enablement projects, production/application support and enhancements, and supporting data operations activities. Additional responsibilities include data analysis, data operations processes and tools, data cataloguing, and developing data SME skills in the Global Product Development and Supply - Data and Analytics Enablement organization. Key Responsibilities The Data Engineer will be responsible for designing, building, and maintaining the data products, evolving the data products, and utilizing the most suitable data architecture required for our organization's data needs to support GPS. Responsible for delivering high-quality data products and analytic-ready data solutions. Develop and maintain data models to support our reporting and analysis needs. Develop ad-hoc analytic solutions from solution design to testing, deployment, and full lifecycle management. Optimize data storage and retrieval to ensure efficient performance and scalability. Collaborate with data architects, data analysts and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements. Ensure data quality and integrity through data validation and testing. Implement and maintain security protocols to protect sensitive data. Stay up-to-date with emerging trends and technologies in data engineering and analytics. Participate in the analysis, design, build, manage, and operate lifecycle of the enterprise data lake and analytics-focused digital capabilities. Develop cloud-based (AWS) data pipelines to facilitate data processing and analysis. Build end-to-end ETL pipelines spanning data ingestion -> data processing -> data integration -> visualization. Proficiency in Python/Node.js along with UI technologies like React.js; Spark, SQL, AWS Redshift, AWS S3, Glue/Glue Studio, Athena, IAM, and other native AWS services; familiarity with Domino and data lake principles. Good to have: knowledge of Neo4j, IAM, CFT, and other native AWS services, plus familiarity with data lake principles.
Familiarity and experience with cloud infrastructure management, working closely with the Cloud engineering team. Participate in effort and cost estimations when required. Partner with other data, platform, and cloud teams to identify opportunities for continuous improvement. Architect and develop data solutions according to legal and company guidelines. Assess system performance and recommend improvements. Responsible for maintaining data acquisition/operational-focused capabilities, including the Data Catalog, User Access Request/Tracking, and Data Use Request. If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
Posted 2 weeks ago
5.0 - 8.0 years
6 - 9 Lacs
Chennai, Guindy
Work from Office
Ciena Blueplanet Inventory - Lead Engineer Chennai - Guindy, India Information Technology 16788 Overview Java development with hands-on experience in Spring Boot. Strong knowledge of UI frameworks, particularly Angular, for developing dynamic, interactive web applications. Experience with Kubernetes for managing microservices-based applications in a cloud environment. Familiarity with Postgres (relational) and Neo4j (graph database) for managing complex data models. Experience in metadata modeling and designing data structures that support high performance and scalability. Expertise in Camunda BPMN and business process automation. Experience implementing rules with the Drools rules engine. Knowledge of Unix/Linux systems for application deployment and management. Experience building data ingestion frameworks to process and handle large datasets. Responsibilities Key Responsibilities: Meta Data Modeling: Develop and implement metadata models that represent complex data structures and relationships across the system. Collaborate with cross-functional teams to design flexible, efficient, and scalable metadata models to support application and data processing requirements. Software Development (Java & Spring Boot): Develop high-quality, efficient, and scalable Java applications using Spring Boot and other Java-based frameworks. Participate in the full software development lifecycle: design, coding, testing, deployment, and maintenance. Optimize Java applications for performance and scalability. UI Development (Angular, optional): Design and implement dynamic, responsive, and user-friendly web UIs using Angular. Integrate the UI with backend microservices, ensuring a seamless and efficient user experience. Ensure that the UI adheres to best practices in terms of accessibility, security, and usability. Containerization & Microservices (Kubernetes): Design, develop, and deploy microservices using Kubernetes to ensure high availability and scalability of applications. Use Docker containers and Kubernetes for continuous deployment and automation of the application lifecycle. Maintain and troubleshoot containerized applications in a cloud or on-premise Kubernetes environment. Requirements Database Management (Postgres & Neo4j): Design and implement database schemas and queries for both relational databases (Postgres) and graph databases (Neo4j). Develop efficient data models and support high-performance query optimization. Collaborate with the data engineering team to integrate data pipelines and ensure the integrity of data storage. Business Process Modeling (BPMN): Utilize BPMN to model business processes and workflows. Design and optimize process flows to improve operational efficiency. Work with stakeholders to understand business requirements and implement process automation. Rule Engine (Drools): Implement business logic using the Drools rules engine to automate decision-making processes. Work with stakeholders to design and define business rules and integrate them into applications. Ingestion Framework: Build and maintain robust data ingestion frameworks that process large volumes of data efficiently. Ensure proper data validation, cleansing, and enrichment during the ingestion process.
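To make the ingestion-framework responsibility concrete, here is a hedged Python sketch that mirrors relational inventory records into a Neo4j graph via the official driver. The connection details, labels, and relationship types are illustrative stand-ins, not Blue Planet's actual schema:

```python
# Hedged sketch: ingest network-element links into Neo4j with MERGE so the
# load is idempotent. Records stand in for rows read from Postgres.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

records = [
    {"src": "NE-001", "dst": "NE-002", "link": "10GE"},
    {"src": "NE-002", "dst": "NE-003", "link": "100GE"},
]

cypher = """
MERGE (a:NetworkElement {id: $src})
MERGE (b:NetworkElement {id: $dst})
MERGE (a)-[:CONNECTED_TO {type: $link}]->(b)
"""

with driver.session() as session:
    for rec in records:
        session.run(cypher, rec)  # parameters passed as a dict
driver.close()
```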
Posted 2 weeks ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking a skilled and creative RShiny Developer with hands-on experience in MarkLogic and graph databases. You will be responsible for designing and developing interactive web applications using RShiny, integrating complex datasets stored in MarkLogic, and leveraging graph capabilities for advanced analytics and knowledge representation. Roles & Responsibilities: Develop interactive dashboards and web applications using RShiny. Connect to and query data from MarkLogic, especially leveraging its graph and semantic features (e.g., RDF triples, SPARQL). Design and maintain backend data workflows and APIs. Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions. Optimize the performance and usability of RShiny applications. Functional Skills: Must-Have Skills: Proven experience with R and RShiny in a production or research setting. Proficiency with MarkLogic, including use of its graph database features (triples, SPARQL queries, semantics). Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic. Strong understanding of data visualization principles and UI/UX best practices. Experience with data integration and wrangling. Good-to-Have Skills: Experience with additional graph databases (e.g., Neo4j, Stardog) is a plus. Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS). Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications. Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies. Professional Certifications (preferred): SAFe methodology. Courses in R, RShiny, and data visualization from reputable institutions (e.g., Johns Hopkins’ “Data Science Specialization” on Coursera). Other graph certifications (optional but beneficial): Neo4j Certified Professional (to demonstrate transferable graph database skills). Linked Data and Semantic Web training (via organizations like W3C or O’Reilly). Soft Skills: Excellent written and verbal communication skills (English), with the ability to translate technology content into business language at various levels. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong problem-solving and analytical skills. Strong time and task management skills to estimate and successfully meet project timelines, with the ability to bring consistency and quality assurance across various projects. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment.
Please contact us to request an accommodation.
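Although this role's front end is RShiny, the MarkLogic semantics layer it queries is language-agnostic. As a hedged illustration, the Python sketch below posts a SPARQL query to MarkLogic's REST endpoint; the host, credentials, and ontology IRIs are hypothetical, and the example assumes MarkLogic's default setup of accepting SPARQL at /v1/graphs/sparql with digest authentication:

```python
# Hedged sketch: run a SPARQL SELECT against MarkLogic's REST SPARQL endpoint.
import requests
from requests.auth import HTTPDigestAuth

query = """
SELECT ?drug ?target
WHERE { ?drug <http://example.org/ontology#inhibits> ?target }
LIMIT 10
"""

resp = requests.post(
    "http://marklogic-host:8000/v1/graphs/sparql",  # hypothetical host/port
    data=query,
    headers={
        "Content-Type": "application/sparql-query",
        "Accept": "application/sparql-results+json",
    },
    auth=HTTPDigestAuth("user", "password"),
)
resp.raise_for_status()

# Print each solution binding from the standard SPARQL JSON results format.
for binding in resp.json()["results"]["bindings"]:
    print(binding["drug"]["value"], "->", binding["target"]["value"])
```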
Posted 2 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Job Summary We are seeking an Azure Certified AI & MLOps Engineer with deep expertise in AI/ML use case development, MLOps, and cloud-based deployments, especially within the real estate and construction domains. The ideal candidate will be adept at designing predictive and generative AI models, managing scalable pipelines using Docker and Azure, and integrating complex datasets across SQL, vector, and graph databases. You will work closely with business stakeholders to deliver impactful AI-driven solutions that streamline processes and enhance decision-making. Must-Have Skills (Mandatory) * Azure AI Engineer Associate certification * Proficiency in Python and AI/ML frameworks (TensorFlow, PyTorch, Scikit-learn) * Strong hands-on experience with MLOps pipelines using MLflow, Kubeflow, or TFX * Expertise in Docker for containerization and orchestration of AI/ML applications * Advanced SQL skills for querying and managing structured datasets * Experience with Azure cloud services (Azure Databricks, Azure ML, Microsoft Fabric) * Knowledge of graph databases (Neo4j) and vector databases for ML use cases * Proven track record of deploying AI/ML solutions in the real estate or construction industries Good-to-Have Skills (Optional) * Databricks certification (e.g., Databricks Certified Machine Learning Professional) * Additional certifications in Machine Learning, Deep Learning, or Generative AI * Docker Certified Associate (DCA) * Experience integrating industry-specific datasets with external APIs and Azure services * Familiarity with CI/CD pipelines and infrastructure-as-code tools Qualifications & Experience * Bachelor's or Master's degree in Computer Science, Data Science, or a related field * Minimum 4+ years of experience in AI/ML engineering with a focus on MLOps and the Azure ecosystem * Strong collaboration skills for working with business and IT stakeholders * Demonstrated experience in AI-driven applications such as chatbots, demand forecasting, property valuation, and risk assessment
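As a concrete example of the MLOps tooling listed above, here is a minimal hedged sketch of an MLflow tracking run: train a model, log parameters and metrics, and store the model artifact for a downstream deployment stage. The experiment name echoes the posting's property-valuation use case, and the dataset is synthetic:

```python
# Hedged sketch: one MLflow-tracked training run with a synthetic dataset.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("property-valuation")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mae", mae)
    # Log the fitted model so a CI/CD stage can register and deploy it later.
    mlflow.sklearn.log_model(model, "model")
```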
Posted 2 weeks ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a full stack core software engineer with a deep understanding of Java/Python and their ecosystems, and strong hands-on experience in building high-performing, scalable, enterprise-grade applications. You will be part of a talented software team that works on mission-critical applications. As a full stack core software engineer, your responsibilities include understanding user requirements and working with a development team on the design, implementation, and delivery of Java/Python applications, while providing expertise across the full software development lifecycle, from concept and design through testing. You will work closely with the business architecture group to design and implement current- and target-state business processes using various tools and technologies. The candidate should ideally have knowledge of several of these technologies: the Java/Python/Unix technology stack, Angular, and JavaScript; SQL/NoSQL and graph databases are used for data storage (we tailor the tools to the needs), integrated with other bank systems via RESTful APIs/web services and Kafka Streams. Qualifications: 7+ years of industry experience, with strong hands-on development of mission-critical applications using Java/Python technologies, aligning each project with the firm's strategic objectives and overseeing team operations to ensure project success. Experience with complex system integration projects. Java, Spring, Spring Boot, Spring Cloud, J2EE design patterns, REST services. Front-end technologies such as JavaScript, Angular, CSS2/CSS3, and HTML. Strong knowledge of SQL, JDBC, and Unix commands. Hands-on database experience in relational (Oracle/DB2) and NoSQL (MongoDB) stores. Hands-on experience working on and deploying applications in the cloud. Hands-on experience with code testing tools such as JUnit, Mockito, and Cucumber. Deployment acquaintance with Apache Tomcat, OpenShift, or other cloud environments. Expertise in test-driven development (JUnit, JMeter), continuous integration (Jenkins), build tools (Maven), version control (Git), and development tools (Eclipse, IntelliJ). Excellent communication skills (written and verbal) and the ability to work in a team environment. Excellent analytical and problem-solving skills and the ability to work well independently. Experience working with business analysts, database administrators, project managers, and technical architects in multiple geographical areas. Experience in the financial services industry is an added advantage; an understanding of financial and reporting hierarchies will be beneficial. Education: Bachelor’s or equivalent degree in Computer Science. Experience: Minimum 7+ years of relevant experience developing applications/solutions, preferably in the financial services industry. Required Skills: Minimum 7+ years of application development experience in Java/Python with: Spring Boot and microservices; REST web services; JPA with Hibernate; Core Java/Python. Minimum 3+ years of hands-on experience designing architecture for enterprise applications. Angular and JavaScript. Experience working on a native cloud platform. Experience with development IDEs such as Eclipse and IntelliJ. Experience with SQL/NoSQL stores such as Oracle, PostgreSQL, Neo4j, and MongoDB. Experience with caching frameworks such as Redis. Experience with CI/CD systems such as Helm and Harness. Experience with messaging services such as Kafka. Experience in Python and Unix shell scripting is an added plus. Excellent troubleshooting skills.
Strong problem-solving skills, business acumen, and demonstrated excellent oral and written communication skills with both technical and non-technical audiences. Experience with the Agile software development lifecycle methodology and related tooling, for example JIRA and Scrum. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
1.0 - 3.0 years
0 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-205390 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Feb. 13, 2025 CATEGORY: Information Systems At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What you will do Let’s do this. Let’s change the world. In this vital role you will be at the forefront of innovation, using your skills to design and implement pioneering AI/Gen AI solutions. With an emphasis on creativity, collaboration, and technical excellence, this role provides a unique opportunity to work on ground-breaking projects that enhance operational efficiency at the Amgen Technology and Innovation Centre while ensuring the protection of critical systems and data. Roles & Responsibilities: Design, develop, and deploy Gen AI solutions using advanced LLMs like the OpenAI API, open-source LLMs (Llama 2, Mistral, Mixtral), and frameworks like LangChain and Haystack. Design and implement AI & Gen AI solutions that drive productivity across all roles in the software development lifecycle. Demonstrate the ability to rapidly learn the latest technologies and develop a vision for embedding the solution to improve operational efficiency within a product team. Collaborate with multi-functional teams (product, engineering, design) to set project goals, identify use cases, and ensure seamless integration of Gen AI solutions into current workflows. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master’s degree and 1 to 3 years of experience with programming languages such as Java and Python OR Bachelor’s degree and 3 to 5 years of experience with programming languages such as Java and Python OR Diploma and 7 to 9 years of experience with programming languages such as Java and Python Preferred Qualifications: Proficiency in programming languages such as Python and Java. Leverage advanced knowledge of the Python open-source software stack, such as Django or Flask, Django REST or FastAPI, etc. Experience working with RAG technologies and LLM frameworks, LLM model registries (Hugging Face), LLM APIs, embedding models, and vector databases. Familiarity with cloud security (AWS/Azure/GCP). Utilize expertise in integrating and demonstrating Gen AI LLMs to maximize operational efficiency. Good-to-Have Skills: Experience with graph databases (Neo4j and Cypher would be a big plus). Experience with prompt engineering and familiarity with frameworks such as DSPy would be a big plus. Professional Certifications: AWS / GCP / Databricks. Soft Skills: Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for our teammates’ professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
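To ground the RAG terminology in the listing above, here is a deliberately bare-bones sketch of a retrieval-augmented answer loop in Python. A production system would use embedding models and a vector database as the posting describes; the keyword scorer, documents, and model name here are illustrative stand-ins:

```python
# Hedged RAG sketch: retrieve the most relevant snippet, then ground the
# model's answer in it. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

docs = {
    "onboarding": "New engineers get repo access on day one via the IT portal.",
    "deploys": "Production deploys run through the Jenkins pipeline after review.",
}

def retrieve(question: str) -> str:
    """Naive retrieval: the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs.values(), key=lambda t: len(q_words & set(t.lower().split())))

client = OpenAI()

question = "How do production deploys work?"
context = retrieve(question)
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(reply.choices[0].message.content)
```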
Posted 2 weeks ago
9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Database Administrator (MySQL/MongoDB/Linux) Experience: 6–9 years Department: IT Infrastructure / Database Management Reporting to: IT Infrastructure Manager / Technical Architect Job Summary We are seeking an experienced and highly skilled Senior Database Administrator (DBA) to manage and optimize our MySQL, Neo4j, and MongoDB database environments. The ideal candidate will have strong experience in database administration, performance tuning, backup & recovery, and Linux-based server environments. This role will be critical in ensuring high availability, data integrity, and optimal database performance. Key Responsibilities • Administer, maintain, monitor, and optimize MySQL, Neo4j, and MongoDB databases across development, staging, and production environments. • Perform installation, configuration, and upgrades of databases on Linux servers. • Ensure database backup/recovery, replication, high availability, and disaster recovery strategies. • Conduct query optimization, performance tuning, and resource management. • Monitor database performance and proactively identify potential issues. • Ensure data security, user access control, and compliance with internal data policies. • Automate routine maintenance tasks using scripts or scheduling tools. • Collaborate with DevOps, Development, and Infrastructure teams for deployment, scaling, and issue resolution. • Maintain documentation related to database configurations, processes, and access controls. Required Skills & Qualifications • Bachelor’s degree in Engineering, Information Technology, or a related field. • 6+ years of hands-on experience in MySQL, Neo4j, and MongoDB database administration. • Strong experience with Linux server environments (CentOS, Ubuntu, RHEL). • Proven expertise in performance tuning, replication, and query optimization. • Knowledge of backup and recovery tools (e.g., Percona XtraBackup, mongodump/mongorestore). • Familiarity with monitoring tools like Prometheus, Grafana, Nagios, or equivalent. • Good understanding of networking concepts, storage, and system security. • Scripting experience with Shell/Python is a plus. • Exposure to cloud database services (AWS RDS, time-series databases, PostgreSQL, Atlas, etc.) is desirable. Preferred Certifications • MySQL Database Administrator Certification • MongoDB Certified DBA Associate • Linux System Administration Certification (Red Hat, CompTIA, etc.) Soft Skills • Strong analytical and troubleshooting skills. • Effective communication and stakeholder management. • Ability to work independently and as part of a cross-functional team. • Strong sense of ownership and commitment to uptime and data reliability.
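As an example of the scripted maintenance this role calls for, here is a hedged Python sketch of a nightly logical backup using mysqldump and mongodump. Hosts, credentials, and paths are hypothetical, and a production MySQL setup would typically pair this with Percona XtraBackup for hot physical backups and read credentials from an option file rather than the command line:

```python
# Hedged sketch: date-stamped logical backups for MySQL and MongoDB.
import datetime
import subprocess

stamp = datetime.date.today().isoformat()

# Consistent logical MySQL dump; --single-transaction avoids locking InnoDB tables.
subprocess.run(
    ["mysqldump", "--single-transaction", "-h", "db-host", "-u", "backup_user",
     "-pREDACTED",  # placeholder; prefer an option file in practice
     "appdb", f"--result-file=/backups/appdb-{stamp}.sql"],
    check=True,
)

# BSON dump of a single MongoDB database into a date-stamped directory.
subprocess.run(
    ["mongodump", "--host", "mongo-host", "--db", "appdb",
     f"--out=/backups/mongo-{stamp}"],
    check=True,
)

print(f"Backups for {stamp} completed.")
```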
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role: We are seeking a Neo4j Developer to work on an innovative GPS data modeling project within the Biosphere platform. You will be responsible for transforming Tablized GPS datasets into an optimized graph structure based on provided Lucidchart models (Moch GPS and GPS Graph Definition). Your work will directly support next-gen spatial analysis and environmental intelligence systems. Responsibilities: Build and optimize Neo4j graph structures to represent geospatial GPS data. Implement Cypher queries and graph algorithms for querying and analyzing GPS routes, relationships, and entities. Translate Lucidchart designs (Moch GPS and GPS Graph definitions) into robust Neo4j schemas. Ingest and map the Tablized GPS dataset into the graph database. Collaborate with stakeholders to align graph models with the Biosphere platform’s goals. Ensure data integrity, performance, and scalability of the GPS graph model. Requirements: 2–3 years of experience working with Neo4j and the Cypher query language. Proven ability to model and implement complex graph structures. Strong understanding of geospatial data, GPS data processing, and spatial relationships. Experience integrating data from CSV, Excel, or JSON into Neo4j. Ability to work with documentation and diagrams (Lucidchart) to build data systems. Bonus: Familiarity with environmental or biosphere-related platforms. Perks: Join a mission-driven tech company supporting next-gen environmental platforms. Opportunity to work on a cutting-edge graph database project with real-world applications. Full-time, on-site role (Monday–Friday, day shift) in Roseburg, Oregon. Collaborative and growth-oriented work culture.
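A hedged sketch of the core task, mapping tabular GPS rows into a graph, is shown below: each fix becomes a node with a spatial point property, and consecutive fixes on a route are chained. The labels, properties, and connection details are illustrative stand-ins for the Lucidchart-defined schema, not the project's actual model:

```python
# Hedged sketch: load tabular GPS fixes into Neo4j as a route graph.
from neo4j import GraphDatabase

rows = [  # stand-in for the Tablized GPS dataset (e.g. loaded from CSV)
    {"route": "R1", "seq": 1, "lat": 43.21, "lon": -123.34, "ts": "2024-01-01T08:00:00"},
    {"route": "R1", "seq": 2, "lat": 43.22, "lon": -123.35, "ts": "2024-01-01T08:05:00"},
]

cypher = """
MERGE (r:Route {id: $route})
CREATE (p:GpsFix {seq: $seq, ts: datetime($ts),
                  location: point({latitude: $lat, longitude: $lon})})
MERGE (r)-[:HAS_FIX]->(p)
WITH r, p
MATCH (r)-[:HAS_FIX]->(prev:GpsFix)
WHERE prev.seq = $seq - 1
MERGE (prev)-[:NEXT]->(p)
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for row in rows:  # rows must arrive ordered by seq for the NEXT chain
        session.run(cypher, row)
driver.close()
```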
Posted 2 weeks ago
18.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Name of company: The Smart Fellowship by Workverse Join our mission: We are building an automation-proof workforce for Bharat. We are rooting for Team Humans by training graduates to think, solve, and communicate beyond what AI can do. We want smart fellows to thrive alongside AI and remain in control - instead of being replaced by it. What we do: Formerly known as X Billion Skills Lab (since 2017), The Smart Fellowship is a hybrid workplace simulation where learners master in-demand workplace skills and GenAI skills through role play. In our immersive, narrative-based experience, learners "work" in imaginary companies and solve 50+ generalist workplace scenarios - to build a strong intellectual foundation for rapid growth in the real world of work. Till date we have worked with 50,000+ learners and are even a credit program in one of India's top private universities. The best part about this role: Get direct exposure to customer relationship building, HR strategy, and operations in a fast-growing startup. Opportunity to work closely with leadership and see your ideas in action. Contribute to building a future-ready, human-first workforce in the age of AI. Location: Khar West, Mumbai (Work from office) P.S. We’re looking for someone who genuinely cares about the work we’re doing and sees themselves growing with us. If it’s the right fit on both sides, we’d love to offer a long-term commitment with fast-tracked career growth. Meet the founder Samyak Chakrabarty has been featured by Forbes as one of Asia's most influential young entrepreneurs and has founded several social impact initiatives that have won national and international recognition. He has over 18 years of entrepreneurial experience and is on a mission to empower humans to outsmart artificial intelligence at work. Till date his work has positively impacted 1,00,000+ youth across the nation. For more information please visit his LinkedIn profile. Your role: As an AI/ML Architect at Workverse, you'll play a key role in shaping intelligent agents that assist in enterprise decision-making, automate complex data science workflows, and integrate seamlessly into our simulation. These agents will shape Neuroda, the world's first AI soft-skills coach and workplace mentor, leveraging reasoning, tool use, and memory, while staying aligned with our focus on enhancing the soft-skills learning experience within our simulation environment. We're seeking a strong engineering generalist with deep experience in LLMs, agent frameworks, and production-scale systems. You’ve likely prototyped or shipped agent-based systems, pushed the boundaries of what LLMs can do, and are looking for a meaningful opportunity to build the future of a human-first workforce in the age of AI. Responsibilities: Lead the design and development of enterprise AI agent and data science agent systems that combine reasoning, tool orchestration, and memory. Collaborate with product, research, and infrastructure teams to create scalable agent architectures tailored for enterprise users. Build agent capabilities for tasks like automated analysis, reporting, data wrangling, and domain-specific workflows across business verticals. Integrate real-time knowledge, enterprise APIs, RAG pipelines, and proprietary tools into agentic workflows. Work closely with the alignment and explainability teams to ensure agents remain safe, auditable, and transparent in their reasoning and output.
Continuously evaluate and incorporate advances in GenAI (e.g., controllability, multi-modal models, memory layers) into the agent stack. Requirements: Demonstrated experience building with LLMs and agentic frameworks (e.g., LangChain, LangFlow, Semantic Kernel, CrewAI, Haystack, ReAct, AutoGPT, etc.). Experience productionizing AI/LLM workflows and integrating them into real-world applications or systems. 2+ years of software engineering experience, ideally with some time in early-stage startups or AI-first environments. Strong Python skills and a solid understanding of full-stack backend architecture: APIs, cloud infrastructure (AWS), and relational, non-relational, and graph databases (SQL, NoSQL, ArangoDB, Neo4j). (Bonus) Experience working on agent toolchains for data science, MLOps, game data science, and game analytics. Think you’re the one? Apply: Double-check that you are comfortable with the work-from-office requirement. Share your CV with tanvi@workverse.in, along with a brief note about why you think this role was made for you!
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Technical Experience/Knowledge Needed Cloud-hosted services environment. Proven ability to work in a cloud-based environment. Ability to manage and maintain cloud infrastructure on AWS. Must have strong experience in technologies such as Docker, Kubernetes, Functions, etc. Knowledge of orchestration tools such as Ansible. Experience with the ELK Stack. Strong knowledge of microservices, container-based architecture, and the corresponding deployment tools and techniques. Hands-on knowledge of implementing multi-staged CI/CD with tools like Jenkins and Git. Sound knowledge of tools like Kibana, Kafka, Grafana, Instana, and so on. Proficient in Bash scripting. Must have in-depth knowledge of clustering, load balancing, high availability and disaster recovery, auto scaling, etc. Skill Required (Other) AWS Certified Solutions Architect or/and Linux System Administrator. Strong ability to work independently on complex issues. Collaborate efficiently with internal experts to resolve customer issues quickly. Additional No objection to working night shifts, as the production support team works on a 24x7 basis; rotational shifts will be assigned weekly so candidates get equal opportunity to work day and night shifts. But if you get candidates willing to work the night shift on a need basis, discuss with us. Early joining. Willingness to work in Delhi NCR. Skills: DevOps, Amazon Web Services (AWS), Linux/Unix, Shell Scripting, Jenkins, MySQL, New Relic, Git, Amazon RDS, Amazon CloudFront, Elasticsearch, Apache Kafka, Neo4j, Docker, Ansible and Kubernetes (ref:hirist.tech)
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location(s): Quay Building 8th Floor, Bagmane Tech Park, Bengaluru, IN Line Of Business: Data Estate (DE) Job Category: Engineering & Technology Experience Level: Experienced Hire At Moody's, we unite the brightest minds to turn today’s risks into tomorrow’s opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity. Role Overview We are seeking a highly skilled and experienced Senior Full Stack Engineer to join our dynamic team. You will play a crucial role in designing, developing, deploying, and maintaining highly resilient, low-latency web applications that form the core of our user experience. We're looking for a hands-on expert with deep proficiency in the modern JavaScript ecosystem, particularly Node.js, TypeScript, and React. While your core expertise lies in JavaScript technologies, experience developing backend systems with Python and/or Java is valuable. As a senior member of the team, you will significantly influence our technical direction, mentor other engineers, and champion software development best practices. Key Responsibilities Take ownership of the design, development, testing, deployment, and maintenance of robust, scalable, highly resilient, low-latency web applications Lead the implementation of complex features, focusing on performant front-end solutions (React, TypeScript) and efficient back-end services (primarily Node.js) Architect and implement solutions optimized for speed, scalability, and reliability across the entire stack Design, build, document, and maintain clean, efficient, and scalable APIs Collaborate effectively with product managers, designers, and fellow engineers to translate requirements into well-architected technical solutions Write high-quality, maintainable, secure, and thoroughly tested code Actively participate in code reviews, providing constructive feedback Diagnose, troubleshoot, and resolve complex technical issues across all environments Mentor junior and mid-level engineers, fostering their technical growth Stay abreast of emerging web technologies, evaluating and proposing their adoption where beneficial Contribute significantly to architectural discussions, helping to shape our technical landscape Required Qualifications & Skills 5+ years of professional experience in full-stack software development, with a proven track record of shipping complex web applications Demonstrable experience building and operating web applications with high availability and low latency.
Strong proficiency in JavaScript and TypeScript
Extensive experience with Node.js for building scalable back-end services
Strong proficiency in React and its ecosystem (state management, hooks)
Solid command of modern web technologies (HTML5, CSS3)
Experience designing and building robust APIs following RESTful principles
Understanding of fundamental software engineering principles and architectural design patterns
Experience working with relational databases and at least one NoSQL database
Proficiency with Git and modern CI/CD practices
Experience with testing frameworks (unit, integration, end-to-end)
Strong analytical, problem-solving, and debugging capabilities
Excellent communication and interpersonal skills
Preferred Qualifications & Skills
Experience with Python (Django, Flask, FastAPI) and/or Java (Spring Boot)
Familiarity with graph databases, particularly Neo4j
Cloud platform experience (AWS, Azure, or GCP)
Experience with Docker and Kubernetes
Knowledge of microservices architecture patterns
Experience with caching strategies (Redis, Memcached)
Understanding of message queues and event-driven architecture
Experience with observability tools for monitoring, logging, and tracing
Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law.
Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary. For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet.
Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee's tenure with Moody's.
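This role centers on Node.js/TypeScript, which is not shown here; as a language-neutral illustration of the RESTful principles the posting lists (using Flask, which appears in its preferred qualifications), here is a minimal hedged sketch. The /items resource and its fields are invented for the example:

```python
# Minimal REST endpoint sketch using Flask. Resource names and fields are
# illustrative only; a real service would back this with a database layer.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for persistent storage.
ITEMS = {1: {"id": 1, "name": "example"}}

@app.route("/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

@app.route("/items", methods=["POST"])
def create_item():
    payload = request.get_json(force=True)  # assumes a JSON object body
    new_id = max(ITEMS) + 1
    ITEMS[new_id] = {"id": new_id, **payload}
    return jsonify(ITEMS[new_id]), 201

if __name__ == "__main__":
    app.run(debug=True)
```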
Posted 2 weeks ago
4.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
JOB_POSTING-3-70798-2
Job Description
Role Title: AVP, Senior Product Engineer (L10)
Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization. The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL). It owns and manages the tool platforms that provide an environment for designing and building data solutions, collaborates with cross-functional teams to integrate new data sources and ensure data quality and consistency, and builds and maintains data models that facilitate data access and analysis by Data Scientists and Analysts.
Role Summary/Purpose: We are looking for a strong individual contributor and platform administrator who will build and manage the Neo4j platform for scanning data sources across on-prem environments. The engineer will work cross-functionally with operations, other data engineers, and the product owner to ensure the delivered capabilities meet business needs.
Key Responsibilities
Experience with Neo4j HA architecture for critical applications (clustering, multiple data centers, etc.) with LDAP-based authentication and authorization for Neo4j
Expertise in graph-driven data science applications, such as graph-based feature engineering, graph embeddings, or graph neural networks
Experience with encryption solutions for Neo4j
Experience in high availability and disaster recovery solutions and deployment of new technology
Superior decision-making, client relationship, and vendor management skills
Required Skills/Knowledge
Experience in Neo4j product administration and development
Basic understanding of the Linux OS (file systems, environment, users and groups)
Basic understanding of firewalls, ports, and how to check connectivity between two environments
Exposure to the public cloud ecosystem (AWS, Azure, and GCP) and its components
Understanding of DevOps pipelines
Exposure to operations tasks such as job scheduling, monitoring, platform health checks, and automation
Understanding of SAFe methodology / working in an Agile environment
Desired Skills/Knowledge
Experience with installation and configuration of Bloom and GDS software
Hands-on experience with cloud services such as S3, Redshift, etc.
Extensive experience deploying and managing applications running on Kubernetes (including administering Kubernetes clusters)
Experience deploying and working with observability systems such as Prometheus, Grafana, New Relic, and Splunk logging
Eligibility Criteria
Bachelor's degree in Computer Science with a minimum of 4+ years of relevant technology experience, or in lieu of a degree, 6+ years of relevant technology experience
Minimum 5+ years of financial services experience
Minimum 5+ years of experience managing data platforms
Ability to develop and maintain strong collaborative relationships at all levels across IT and the business
Excellent written and oral communication skills, along with a strong ability to lead and influence others
Experience working iteratively in a fast-paced agile environment
Demonstrated competency linking business strategy with IT technology initiatives
Proven track record of leading and executing critical business initiatives on time and within budget
Demonstrated ability to drive change and work effectively across business and geographical boundaries
Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology
Superior decision-making, client relationship, and vendor management skills
Work Timings: 3 PM - 12 AM IST (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time; timings are anchored to US Eastern hours and will adjust twice a year locally. This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.)
For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (First Formal/Final Formal, PIP)
L08+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply
Grade/Level: 10
Job Family Group: Information Technology
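For illustration, here is a minimal Python sketch of the kind of platform health check this role describes, using the official neo4j driver. The URI, credentials, and the choice of SHOW DATABASES are assumptions for the sketch, not details from the posting:

```python
# Hypothetical Neo4j platform health check: verify connectivity and report
# the status of each database. URI and credentials are placeholders.
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"   # assumption: default Bolt endpoint
AUTH = ("neo4j", "password")     # placeholder credentials

def check_platform():
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        driver.verify_connectivity()  # raises if the server is unreachable
        # SHOW DATABASES is a system command (Neo4j 4.x+).
        with driver.session(database="system") as session:
            for record in session.run("SHOW DATABASES"):
                print(record["name"], record["currentStatus"])

if __name__ == "__main__":
    check_platform()
```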
Posted 2 weeks ago
5.0 years
0 Lacs
India
Remote
Location: Remote / Hybrid (India)
Experience Level: 5+ years
Employment Type: Full-time
Tech Stack: Ruby on Rails, Python, Canvas LMS, Neo4j, React, LTI, SCORM, xAPI
About The Project
We're hiring for a large-scale U.S.-based Learning Management System (LMS) platform focused on delivering adaptive learning experiences. The platform integrates open-source Canvas LMS, combines Ruby on Rails and Python, and leverages Neo4j for personalized learning paths. You'll be working on customizing LMS features, integrating AI/ML learning engines, and building graph-driven adaptation logic, all while ensuring a smooth learning journey for users.
Your Role
Lead full-stack development efforts across RoR and Python
Customize and extend Canvas LMS (open-source)
Integrate Neo4j graphs and adaptive learning models
Work with AI/ML teams to connect intelligent backend systems
Collaborate cross-functionally with product and design teams
Must-Have Skills
5+ years of full-stack development experience
Expert in Ruby on Rails and open-source customization
Strong in Python with integration experience
Familiarity with Canvas LMS APIs, LTI, SCORM/xAPI
Experience with Neo4j, Cypher, and graph modeling
Modern JavaScript (React preferred)
RESTful APIs, Git, CI/CD pipelines
Nice-to-Have
Experience in EdTech, MedTech, or HealthTech
Familiarity with Canvas deployment or SIS integrations
Exposure to AI/ML or NLP in learning systems
Knowledge of HIPAA/GDPR compliance
Why Join Us?
Work on a meaningful, large-scale EdTech project
Flexible remote work environment
Collaborate with a global team of passionate engineers
Build future-ready adaptive learning technology
Skills: CI/CD, RESTful APIs, SCORM, Ruby, Neo4j, React, Ruby on Rails, Python, LTI, Git, AI/ML, xAPI, Canvas LMS
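To make the "Neo4j for personalized learning paths" idea concrete, here is a hedged Python sketch of a path query. The Learner/Module labels and the COMPLETED/PRECEDES relationships are a hypothetical schema, not the project's actual graph model:

```python
# Illustrative adaptive-learning query: suggest the next modules a learner
# is eligible for, given what they have already completed.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "password"))  # placeholders

NEXT_MODULES = """
MATCH (l:Learner {id: $learner_id})-[:COMPLETED]->(done:Module)
MATCH (done)-[:PRECEDES]->(next:Module)
WHERE NOT (l)-[:COMPLETED]->(next)
RETURN DISTINCT next.title AS title
"""

def next_modules(learner_id):
    with driver.session() as session:
        return [r["title"] for r in session.run(NEXT_MODULES, learner_id=learner_id)]
```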
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JOB_POSTING-3-70798-5
Job Description
Role Title: AVP, Senior Product Engineer (L10)
Company Overview: Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.
Organizational Overview: This role will be part of the Data Architecture & Analytics group within the CTO organization. The Data team is responsible for designing and developing scalable data pipelines for efficient data ingestion, transformation, and loading (ETL). It owns and manages the tool platforms that provide an environment for designing and building data solutions, collaborates with cross-functional teams to integrate new data sources and ensure data quality and consistency, and builds and maintains data models that facilitate data access and analysis by Data Scientists and Analysts.
Role Summary/Purpose: We are looking for a strong individual contributor and platform administrator who will build and manage the Neo4j platform for scanning data sources across on-prem environments. The engineer will work cross-functionally with operations, other data engineers, and the product owner to ensure the delivered capabilities meet business needs.
Key Responsibilities
Experience with Neo4j HA architecture for critical applications (clustering, multiple data centers, etc.) with LDAP-based authentication and authorization for Neo4j
Expertise in graph-driven data science applications, such as graph-based feature engineering, graph embeddings, or graph neural networks
Experience with encryption solutions for Neo4j
Experience in high availability and disaster recovery solutions and deployment of new technology
Superior decision-making, client relationship, and vendor management skills
Required Skills/Knowledge
Experience in Neo4j product administration and development
Basic understanding of the Linux OS (file systems, environment, users and groups)
Basic understanding of firewalls, ports, and how to check connectivity between two environments
Exposure to the public cloud ecosystem (AWS, Azure, and GCP) and its components
Understanding of DevOps pipelines
Exposure to operations tasks such as job scheduling, monitoring, platform health checks, and automation
Understanding of SAFe methodology / working in an Agile environment
Desired Skills/Knowledge
Experience with installation and configuration of Bloom and GDS software
Hands-on experience with cloud services such as S3, Redshift, etc.
Extensive experience deploying and managing applications running on Kubernetes (including administering Kubernetes clusters)
Experience deploying and working with observability systems such as Prometheus, Grafana, New Relic, and Splunk logging
Eligibility Criteria
Bachelor's degree in Computer Science with a minimum of 4+ years of relevant technology experience, or in lieu of a degree, 6+ years of relevant technology experience
Minimum 5+ years of financial services experience
Minimum 5+ years of experience managing data platforms
Ability to develop and maintain strong collaborative relationships at all levels across IT and the business
Excellent written and oral communication skills, along with a strong ability to lead and influence others
Experience working iteratively in a fast-paced agile environment
Demonstrated competency linking business strategy with IT technology initiatives
Proven track record of leading and executing critical business initiatives on time and within budget
Demonstrated ability to drive change and work effectively across business and geographical boundaries
Expertise in evaluating technology and solution engineering, with a strong focus on architecture and deployment of new technology
Superior decision-making, client relationship, and vendor management skills
Work Timings: 3 PM - 12 AM IST (This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time; timings are anchored to US Eastern hours and will adjust twice a year locally. This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.)
For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying
Inform your manager and HRM before applying for any role on Workday
Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and upload your updated resume (Word or PDF format)
Must not be on any corrective action plan (First Formal/Final Formal, PIP)
L08+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply
Grade/Level: 10
Job Family Group: Information Technology
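As a variant of the health check shown under the earlier Synchrony listing, this sketch targets the HA/clustering item above. It assumes a Neo4j 4.x Enterprise causal cluster, where dbms.cluster.overview() is available (Neo4j 5 replaces it with SHOW SERVERS); host and credentials are placeholders:

```python
# Hypothetical cluster-membership check for a Neo4j 4.x causal cluster.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://cluster-host:7687",
                              auth=("neo4j", "password"))  # placeholders

with driver.session() as session:
    # Yields one row per cluster member; `databases` maps each database
    # name to that member's role (LEADER / FOLLOWER / READ_REPLICA).
    for member in session.run("CALL dbms.cluster.overview()"):
        print(member["addresses"], member["databases"])
```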
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Machine Learning Graph Engineer
Job Description:
Location: Hyderabad, India - Hybrid Remote (3 days in office, 2 days remote)
Position Overview: We are seeking a talented and experienced Machine Learning Engineer with a strong background in graph databases, particularly Neo4j, to join our dynamic team. The ideal candidate will be instrumental in developing and enhancing our knowledge bases and Retrieval-Augmented Generation (RAG) models, driving the accuracy and efficiency of our AI-powered solutions. You will play a key role in deploying cutting-edge models that enhance the AI features of our end-user applications, ensuring they meet the evolving needs of our customers.
Key Responsibilities:
Design, build, and improve ML models, focusing on retraining and optimizing open-source models using Neo4j or similar graph databases.
Deploy and continuously refine open-source models to enhance AI features within end-user applications, ensuring high performance and reliability.
Keep abreast of the latest industry and academic research to integrate new technologies that enhance or replace existing solutions and products.
Collaborate closely with Product Managers and customers to understand requirements and translate them into effective ML solutions that address real-world problems.
Minimum Qualifications:
2+ years of experience as a Machine Learning Engineer, Applied Scientist, or equivalent role, with a proven track record of developing and deploying ML models.
Strong proficiency in Python and familiarity with common machine learning tools (e.g., Spark, PyTorch).
Experience in deploying models and refining them based on iterative customer feedback.
Demonstrated expertise in optimizing distributed model training processes, preferably with GPUs.
A deep passion for Generative AI and Large Language Models (LLMs), with a keen interest in staying ahead of industry trends.
Previous experience with graph databases, specifically Neo4j, and an understanding of their application in building knowledge bases and RAG models.
Desirable Skills:
Experience with Databricks or similar platforms is considered a plus.
Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
A problem-solving mindset, with a strong emphasis on creativity and innovation in approaching complex technical challenges.
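To ground the knowledge-base/RAG pairing the posting describes, here is a minimal hedged Python sketch that pulls graph context from Neo4j and folds it into a prompt. The Document/Entity schema and the MENTIONS relationship are assumptions made for the example:

```python
# Illustrative graph-backed RAG retrieval: fetch documents mentioning an
# entity, then build a grounded prompt from them.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "password"))  # placeholders

CONTEXT_QUERY = """
MATCH (d:Document)-[:MENTIONS]->(e:Entity {name: $entity})
RETURN d.text AS text
LIMIT $k
"""

def retrieve_context(entity, k=5):
    with driver.session() as session:
        return [r["text"] for r in session.run(CONTEXT_QUERY, entity=entity, k=k)]

def build_prompt(question, entity):
    context = "\n".join(retrieve_context(entity))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```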
Posted 2 weeks ago
3.0 - 5.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Project description
Join our data engineering team to lead the design and implementation of advanced graph database solutions using Neo4j. This initiative supports the organization's mission to transform complex data relationships into actionable intelligence. You will play a critical role in architecting scalable graph-based systems, driving innovation in data connectivity, and empowering cross-functional teams with powerful tools for insight and decision-making.
Responsibilities
Graph data modeling and implementation
Design and implement complex graph data models using Cypher and Neo4j best practices
Leverage APOC procedures, custom plugins, and advanced graph algorithms to solve domain-specific problems
Oversee integration of Neo4j with other enterprise systems, microservices, and data platforms
Develop and maintain APIs and services in Java, Python, or JavaScript to interact with the graph database
Mentor junior developers and review code to maintain high-quality standards
Establish guidelines for performance tuning, scalability, security, and disaster recovery in Neo4j environments
Work with data scientists, analysts, and business stakeholders to translate complex requirements into graph-based solutions
Skills
Must have
12+ years in software/data engineering, with at least 3-5 years of hands-on experience with Neo4j
Lead the technical strategy, architecture, and delivery of Neo4j-based solutions
Design, model, and implement complex graph data structures using Cypher and Neo4j best practices
Guide the integration of Neo4j with other data platforms and microservices
Collaborate with cross-functional teams to understand business needs and translate them into graph-based models
Mentor junior developers and ensure code quality through reviews and best practices
Define and enforce performance tuning, security standards, and disaster recovery strategies for Neo4j
Stay up to date with emerging technologies in the graph database and data engineering space
Strong proficiency in the Cypher query language, graph modeling, and data visualization tools (e.g., Bloom, Neo4j Browser)
Solid background in Java, Python, or JavaScript and experience integrating Neo4j with these languages
Experience with APOC procedures, Neo4j plugins, and query optimization
Familiarity with cloud platforms (AWS) and containerization tools (Docker, Kubernetes)
Proven experience leading engineering teams or projects
Excellent problem-solving and communication skills
Nice to have
N/A
Other Languages
English: C1 Advanced
Seniority
Senior
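Since the posting highlights APOC procedures, here is a hedged Python sketch of a typical APOC batch update, the kind used to refactor large graphs without blowing up a single transaction. It assumes the APOC plugin is installed; the Person label and `normalized` property are invented for the example:

```python
# Illustrative APOC batch job: normalize a property across all Person
# nodes in 10k-node batches. Requires the APOC plugin on the server.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "password"))  # placeholders

BATCH_UPDATE = """
CALL apoc.periodic.iterate(
  'MATCH (p:Person) WHERE p.normalized IS NULL RETURN p',
  'SET p.normalized = toLower(p.name)',
  {batchSize: 10000, parallel: false}
)
YIELD batches, total, errorMessages
RETURN batches, total, errorMessages
"""

with driver.session() as session:
    result = session.run(BATCH_UPDATE).single()
    print(result["batches"], result["total"], result["errorMessages"])
```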
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Ciena is committed to our people-first philosophy. Our teams enjoy a culture focused on prioritizing a personalized and flexible work environment that empowers an individual's passions, growth, wellbeing and belonging. We're a technology company that leads with our humanity, driving our business priorities alongside meaningful social, community, and societal impact.
Position: BPUAA Software Engineer
Ciena's Blue Planet UAA is an open, vendor-agnostic software suite with the latest innovations in AI and ML. The unit has an advanced development team responsible for end-to-end network insight that complements the automated service fulfillment capabilities provided by MDSO and NFVO. We are looking for a Software Engineer with the experience to drive the activities related to a given phase/integration setup and installation.
What will you do as a Blue Planet UAA Software Engineer?
Work within Blue Planet with the project manager, team lead, other leads, and team members to provide optimal solutions
Understand the overall project and its phase-wise requirements
Gather, analyze, and define requirements for a given phase/integration
Draft and own requirements for an integration with third-party tools
Draft and own requirements for an entire phase within the project
Create work tasks for a given requirement
Take responsibility for quality and timely deliverables
Review the test plan for various phases/integrations
Review the user guide for various phases/integrations
Desired Skills
Bachelor's Degree (CS, EE) or equivalent experience required; Master's Degree preferred
6+ years of industry experience working with the technologies below
Must have: Java; databases such as MySQL, PostgreSQL, Oracle, or equivalent; data structures and algorithms; multithreading
Good to have: Angular and NodeJS, Neo4j, REST APIs, Akka, Kafka, Docker and Kubernetes, Python and shell scripting
Strong working knowledge of agile/waterfall software methodologies
Strong working knowledge of high-level and low-level software design
Strong working knowledge of unit and integration test development
Strong working knowledge of Linux environments
Strong written and oral English communication skills
Process-oriented, self-starting, self-motivated individual
Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox.
At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
Posted 2 weeks ago
0 years
10 - 13 Lacs
Hyderābād
On-site
JD: Python, Express JS, Neo4j (Cypher, APOC) graph, PostgreSQL, Cassandra, Git, Jenkins
Write reusable, testable, and efficient Python code
Create real-time, low-latency event processing
Hands-on experience in Express JS for building RESTful APIs with Node.js
Good knowledge of graph databases (Neo4j)
Write optimized Cypher and APOC queries for Neo4j
Good experience handling huge data unloads/loads
Write optimized SQL for RDBMS (PostgreSQL) and NoSQL (Cassandra)
Hands-on experience with tools like Git, Jenkins, and PuTTY
Nice to have: knowledge of Java and Spring technologies
Skillset: Python, Express JS, Neo4j (Cypher, APOC) graph, PostgreSQL, Cassandra, Git, Jenkins. Nice to have: Java experience
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,300,000.00 per year
Schedule: Day shift, Monday to Friday
Work Location: In person
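On the "optimized Cypher" item above: the simplest optimization is parameterizing queries so Neo4j can cache one execution plan instead of re-parsing literals. A minimal Python sketch (the Customer/Order schema is invented for the example):

```python
# Parameterized Cypher from Python: one cached plan serves every customer id.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "password"))  # placeholders

FIND_ORDERS = """
MATCH (c:Customer {id: $cid})-[:PLACED]->(o:Order)
RETURN o.id AS order_id, o.total AS total
"""

def orders_for(cid):
    with driver.session() as session:
        return [dict(r) for r in session.run(FIND_ORDERS, cid=cid)]
```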
Posted 2 weeks ago
3.0 - 5.0 years
8 - 11 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
We are hiring candidates for the following role.
Role: Python with Node.js, Express JS, SQL
Only immediate joiners are considered; PF mandatory; good communication skills required
Exp: 3+ years
Location: Bangalore / Chennai / Hyderabad
JD: Python, Express JS, Neo4j (Cypher, APOC) graph, PostgreSQL, Cassandra, Git, Jenkins
Write reusable, testable, and efficient Python code
Create real-time, low-latency event processing
Hands-on experience in Express JS for building RESTful APIs with Node.js
Good knowledge of graph databases (Neo4j)
Write optimized Cypher and APOC queries for Neo4j
Good experience handling huge data unloads/loads
Write optimized SQL for RDBMS (PostgreSQL) and NoSQL (Cassandra)
Hands-on experience with tools like Git, Jenkins, and PuTTY
Nice to have: knowledge of Java and Spring technologies
Skillset: Python, Express JS, Neo4j (Cypher, APOC) graph, PostgreSQL, Cassandra, Git, Jenkins. Nice to have: Java experience
Interested candidates, please share your CV at ashwini@anveta.com
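As a companion to the Cypher sketch under the previous listing, the "optimized SQL for PostgreSQL" item can be illustrated the same way: parameterized statements avoid SQL injection and let the server reuse plans. A hedged Python sketch; the connection details and orders table are hypothetical:

```python
# Parameterized PostgreSQL query from Python via psycopg2.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb",
                        user="app", password="secret")  # placeholders

def recent_orders(customer_id, limit=10):
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, total, created_at
            FROM orders
            WHERE customer_id = %s
            ORDER BY created_at DESC
            LIMIT %s
            """,
            (customer_id, limit),  # values are bound server-side, not interpolated
        )
        return cur.fetchall()
```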
Posted 2 weeks ago
5.0 years
0 Lacs
Lucknow, Uttar Pradesh, India
On-site
Job Description
We are seeking a highly skilled and customer-focused Technical Support Engineer to join our team. This role is responsible for delivering high-quality technical support to our customers, troubleshooting complex technical issues, and collaborating with cross-functional teams to ensure customer success. The Technical Support Engineer is expected to provide advanced technical support on a data-based software product to a global client base and partners. The role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The primary responsibility is to troubleshoot and resolve technical issues, support product adoption, and ensure customer satisfaction. The TSE must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (Bash/Python). They work collaboratively with engineering teams to escalate and resolve complex issues when necessary (e.g., when a code change is required or a behavior is seen for the first time).
Roles and Responsibilities
Respond to customer inquiries and provide in-depth technical support via multiple communication channels.
Collaborate with core engineering and solution engineering teams to diagnose and resolve complex technical problems.
Create and maintain public documentation, internal knowledge base articles, and FAQs.
Monitor and meet SLAs.
Triage varying issues in a timely manner based on error messages, log files, thread dumps, stack traces, sample code, and other available data points.
Efficiently troubleshoot cluster issues across multiple servers, data centers, and regions, in a variety of clouds (AWS, Azure, GCP, etc.), virtual, and bare-metal environments.
The candidate will work during the EMEA time zone (2 PM to 10 PM shift).
Requirements
Must-Have Skills
Education: B.Tech in Computer Engineering, Information Technology, or a related field.
Experience: GraphDB experience is a must; 5+ years in a technical support role on a data-based software product, at least at L3 level.
Linux Expertise: 4+ years, with an in-depth understanding of Linux, including filesystems, process management, memory management, networking, and security.
Graph Databases: 3+ years of experience with Neo4j or similar graph database systems.
SQL Expertise: 3+ years of experience in SQL for database querying, performance tuning, and debugging.
Data Streaming & Processing: 2+ years of hands-on experience with Kafka, Zookeeper, and Spark.
Scripting & Automation: 2+ years, with strong skills in Bash scripting and Python for automation, task management, and issue resolution.
Containerization & Orchestration: 1+ year of proficiency in Docker, Kubernetes, or other containerization technologies.
Monitoring & Performance Tools: Experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring.
Networking & Load Balancing: Proficient in TCP/IP, load-balancing strategies, and troubleshooting network-related issues.
Web & API Technologies: Understanding of HTTP, SSL, and REST APIs for debugging and troubleshooting API-related issues.
Nice-to-Have Skills
Familiarity with Data Science or ML will be an edge.
Experience with LDAP, SSO, OAuth authentication.
Strong understanding of database internals and system architecture.
Cloud certification (at least DevOps Engineer level).
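The log-triage duty above is easy to illustrate with a small script. This hedged Python sketch counts ERROR/WARN lines in a Neo4j debug.log; the path is a common package-install default, not guaranteed for every deployment:

```python
# Hypothetical triage helper: summarize log levels in a Neo4j debug.log.
from collections import Counter

LOG_PATH = "/var/log/neo4j/debug.log"  # assumption: package-install default

def summarize(path=LOG_PATH):
    levels = Counter()
    with open(path, errors="replace") as fh:  # tolerate odd encodings
        for line in fh:
            for level in ("ERROR", "WARN"):
                if f" {level} " in line:
                    levels[level] += 1
    return dict(levels)

if __name__ == "__main__":
    print(summarize())
```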
Posted 2 weeks ago
5.0 years
0 Lacs
Lucknow, Uttar Pradesh, India
On-site
Job Description
We are seeking a highly skilled and customer-focused GraphDB / Neo4j Solutions Engineer to join our team. This role is responsible for delivering high-quality implementations of our GraphDB-based product to customers and collaborating with cross-functional teams to ensure customer success. The Solution Lead is expected to provide in-depth solutions on a data-based software product to a global client base and partners. The role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The Solution Lead must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (Bash/Python).
Roles and Responsibilities
Collaborate with core engineering, customers, and solution engineering teams on functional and technical discovery sessions.
Prepare product and live software demonstrations.
Create and maintain public documentation, internal knowledge base articles, and FAQs.
Design efficient graph schemas and develop prototypes that address customer requirements (e.g., fraud detection, recommendation engines, knowledge graphs).
Apply knowledge of indexing strategies, partitioning, and query optimization in GraphDB.
The candidate will work during the EMEA time zone (2 PM to 10 PM shift).
Requirements
Education and Experience
Education: B.Tech in Computer Engineering, Information Technology, or a related field.
Experience: 5+ years in a Solution Lead role on a data-based software product such as GraphDB or Neo4j.
Must-Have Skills
SQL Expertise: 4+ years of experience in SQL for database querying, performance tuning, and debugging.
Graph Databases and GraphDB Platforms: 4+ years of hands-on experience with Neo4j or similar graph database systems.
Scripting & Automation: 4+ years, with strong skills in C, C++, and Python for automation, task management, and issue resolution.
Virtualization and Cloud Knowledge: 4+ years with Azure, GCP, or AWS.
Management Skills: 3+ years of experience with data requirements gathering and data modeling, whiteboarding, and developing/validating proposed solution architectures, plus the ability to communicate complex information and concepts to prospective users in a clear and effective way.
Monitoring & Performance Tools: Experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring.
Networking & Load Balancing: Proficient in TCP/IP, load-balancing strategies, and troubleshooting network-related issues.
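To make the indexing and fraud-detection items concrete, here is a hedged Python sketch: it creates a property index (Neo4j 4.4+/5.x syntax) and runs a classic shared-device fraud-ring query. The Account/Device schema is invented for the example:

```python
# Illustrative index creation plus a fraud-signal query in Neo4j.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687",
                              auth=("neo4j", "password"))  # placeholders

CREATE_INDEX = """
CREATE INDEX account_id_idx IF NOT EXISTS
FOR (a:Account) ON (a.account_id)
"""

# Pairs of accounts sharing a device -- a common fraud-ring signal.
SHARED_DEVICE = """
MATCH (a1:Account)-[:USES]->(d:Device)<-[:USES]-(a2:Account)
WHERE a1.account_id < a2.account_id
RETURN a1.account_id AS first, a2.account_id AS second, d.id AS device
"""

with driver.session() as session:
    session.run(CREATE_INDEX)
    for row in session.run(SHARED_DEVICE):
        print(dict(row))
```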
Posted 2 weeks ago
Neo4j, a popular graph database management system, is seeing growing demand in the Indian job market. Companies are looking for professionals skilled in using Neo4j to manage and analyze complex relationships in their data. If you are a job seeker interested in Neo4j roles, this article provides insights to help you navigate the job market in India.
The average salary range for Neo4j professionals in India varies by experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
In the Neo4j skill area, a typical career progression looks like:
- Junior Developer
- Developer
- Senior Developer
- Tech Lead
Apart from expertise in Neo4j, professionals in this field are often expected to have or develop skills in:
- Cypher Query Language
- Data modeling
- Database management
- Java or Python programming
As you explore Neo4j job opportunities in India, it's essential to not only possess the necessary technical skills but also be prepared to showcase your expertise during interviews. Stay updated with the latest trends in Neo4j and continuously enhance your skills to stand out in the competitive job market. Prepare thoroughly, demonstrate your knowledge confidently, and land your dream Neo4j job in India. Good luck!