Jobs
Interviews

573 Solr Jobs - Page 20

Setup a job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job ID: Clo-ETP-Pun-985 Location: Pune Role Support customer project deployments in AWS, Google Cloud (GCP) and Microsoft Azure Collaborate with development teams to design and implement robust, scalable, and efficient solutions Identify areas for improvement in system performance, reliability, and security Conduct regular performance testing, capacity planning, and optimization activities Maintain comprehensive documentation of system configurations, processes, and procedures Collaborate with development teams to deploy new code to production environment Manage incident response with engineers and clients Work in rotating 24/7 shifts Skills Familiarity with the following technologies: Linux, git, ruby, bash, AWS / Azure / Google Cloud, Kubernetes, MySQL, Solr, Apache Tomcat, Java, Graylog, Kibana, Zabbix, and Datadog Proven troubleshooting and problem-solving skills in a cloud-based application environment Outstanding communication skills with the ability to work in a client-facing role Show more Show less

Posted 2 months ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

The Digital :SAP Hybris Commerce, Digital :SAP Concur Expenses Management role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Digital :SAP Hybris Commerce, Digital :SAP Concur Expenses Management domain.

Posted 2 months ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office

The Digital :SAP Hybris Commerce, Digital :Liferay Portal role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Digital :SAP Hybris Commerce, Digital :Liferay Portal domain.

Posted 2 months ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Experience in architecting web applications using Java, Spring Boot and React JS. Highly proficient in Programming languages such as Java, JavaScript, TypeScript. Strong knowledge in Web Development Frameworks and Technologies like Node.JS, React JS or Angular JS. Possess good working knowledge in designing enterprise grade full stack solutions which should be highly performing and secured. Should possess strong working knowledge in design and integration patterns. Should exhibit good working knowledge on RDBMS and NoSQL databases. Also possess sound database design and data storage patterns. Strong knowledge about HTTP, Web Performance, SEO and Web Standards. Good working knowledge about Web Security, Cryptography and Security Compliance like PCI, PII etc. Experience with Scrum/Agile development methodologies. A day in the life of an Infosys Equinox employee: As part of the Infosys Equinox delivery team, your primary role would be to ensure effective Design, Development, Validation and Support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. We are looking for experienced Full Stack Architects who want to develop exciting and innovative digital shopping experiences for some of the biggest retail brands in the world. This individual is responsible for delivering the technical solution that satisfies the functional design documents and other useful requirements. The ideal candidate will have a very strong technology background and demonstrated experience in building high-quality enterprise applications with attention to detail which are highly performant and secured. Working experience in an ecommerce implementation will be an added advantage. Apache Camel experience will be an added advantage. Messaging – Proficiency in RabbitMQ/Other messaging frameworks like Kafka, Active MQ Experience in Apache SOLR. Experience in IaaS Cloud architecture design. Third party API integration experience. Show more Show less

Posted 2 months ago

Apply

0 years

2 - 9 Lacs

Chennai

On-site

Comfort level in following Python project management best practices (use of setup.py, logging, pytests, relative module imports,sphinx docs,etc.,) Familiarity in use of Github (clone, fetch, pull/push,raising issues and PR, etc.,) High familiarity in the use of DL theory/practices in NLP applications Comfort level to code in Huggingface, LangChain, Chainlit, Tensorflow and/or Pytorch, Scikit-learn, Numpy and Pandas Comfort level to use two/more of open source NLP modules like SpaCy, TorchText, fastai.text, farm-haystack, and others Knowledge in fundamental text data processing (like use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at right places, deriving insights from clustering, etc.,) Have implemented in real-world BERT/or other transformer fine-tuned models (Seq classification, NER or QA) from data preparation, model creation and inference till deployment Use of GCP services like BigQuery, Cloud function, Cloud run, Cloud Build, VertexAI, Good working knowledge on other open source packages to benchmark and derive summary Experience in using GPU/CPU of cloud and on-prem infrastructures Skillset to leverage cloud platform for Data Engineering, Big Data and ML needs. Use of Dockers (experience in experimental docker features, docker-compose, etc.,) Familiarity with orchestration tools such as airflow, Kubeflow Experience in CI/CD, infrastructure as code tools like terraform etc. Kubernetes or any other containerization tool with experience in Helm, Argoworkflow, etc., Ability to develop APIs with compliance, ethical, secure and safe AI tools. Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc., Deeper understanding of javascript, css, angular, html, etc., is a plus. Education : Bachelor’s or Master’s Degree in Computer Science, Engineering, Maths or Science Performed any modern NLP/LLM courses/open competitions is also welcomed. Design NLP/LLM/GenAI applications/products by following robust coding practices, Explore SoTA models/techniques so that they can be applied for automotive industry usecases Conduct ML experiments to train/infer models; if need be, build models that abide by memory & latency restrictions, Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.,) Converge multibots into super apps using LLMs with multimodalities Develop agentic workflow using Autogen, Agentbuilder, langgraph Build modular AI/ML products that could be consumed at scale. Data Engineering: Skillsets to perform distributed computing (specifically parallelism and scalability in Data Processing, Modeling and Inferencing through Spark, Dask, RapidsAI or RapidscuDF) Ability to build python-based APIs (e.g.: use of FastAPIs/ Flask/ Django for APIs) Experience in Elastic Search and Apache Solr is a plus, vector databases.

Posted 2 months ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Requirements Description and Requirements Position Summary: A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g . Ansible , Azure DevOps, Shell, Python ) to streamline operations and improve efficiency is highly valued. Job Responsibilities: Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters. Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency. Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features. Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos. Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity. Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency. Analyze logs and use tools like Splunk to debug and resolve production issues. Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency. Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management. Education: Bachelor’s degree in computer science, Information Systems, or another related field with 7+ years of IT and Infrastructure engineering work experience. Experience: 7+ Years Total IT experience & 4+ Years relevant experience in Big Data database. Technical Skills: Big Data Platform Management : Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr , Apache Hive, Apache Kafka, Apache NiFi , Apache Ranger, Apache Spark, as well as JanusGraph and IBM BigSQL . Automation and Scripting : Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency. DevOps Practices : Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices. Monitoring and Troubleshooting : Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues. Linux Administration : Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning. Backup and Recovery : Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity. Security and Access Management : Understanding of security best practices, including user access management and integration with tools like Kerberos. Agile Methodologies : Knowledge of Agile practices and frameworks, such as SAFe , with experience working in Agile environments. ITSM Tools : Familiarity with ITSM processes and tools like ServiceNow for incident and change management. About MetLife Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife , through its subsidiaries and affiliates, is one of the world’s leading financial services companies; providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible . Join us! Show more Show less

Posted 2 months ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Requirements Description and Requirements Position Summary: A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g . Ansible , Azure DevOps, Shell, Python ) to streamline operations and improve efficiency is highly valued. Job responsibilities: Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters. Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency. Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features. Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos. Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity. Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency. Analyze logs and use tools like Splunk to debug and resolve production issues. Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency. Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management. Education: Bachelor’s degree in computer science, Information Systems, or another related field with 7+ years of IT and Infrastructure engineering work experience. Experience: 7+ Years Total IT experience & 4+ Years relevant experience in Big Data database. Big Data Platform Management : Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr , Apache Hive, Apache Kafka, Apache NiFi , Apache Ranger, Apache Spark, as well as JanusGraph and IBM BigSQL . Automation and Scripting : Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency. DevOps Practices : Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices. Monitoring and Troubleshooting : Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues. Linux Administration : Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning. Backup and Recovery : Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity. Security and Access Management : Understanding of security best practices, including user access management and integration with tools like Kerberos. Agile Methodologies : Knowledge of Agile practices and frameworks, such as SAFe , with experience working in Agile environments. ITSM Tools : Familiarity with ITSM processes and tools like ServiceNow for incident and change management. About MetLife Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife , through its subsidiaries and affiliates, is one of the world’s leading financial services companies; providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible . Join us! Show more Show less

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Comfort level in following Python project management best practices (use of setup.py, logging, pytests, relative module imports,sphinx docs,etc.,) Familiarity in use of Github (clone, fetch, pull/push,raising issues and PR, etc.,) High familiarity in the use of DL theory/practices in NLP applications Comfort level to code in Huggingface, LangChain, Chainlit, Tensorflow and/or Pytorch, Scikit-learn, Numpy and Pandas Comfort level to use two/more of open source NLP modules like SpaCy, TorchText, fastai.text, farm-haystack, and others Knowledge in fundamental text data processing (like use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at right places, deriving insights from clustering, etc.,) Have implemented in real-world BERT/or other transformer fine-tuned models (Seq classification, NER or QA) from data preparation, model creation and inference till deployment Use of GCP services like BigQuery, Cloud function, Cloud run, Cloud Build, VertexAI, Good working knowledge on other open source packages to benchmark and derive summary Experience in using GPU/CPU of cloud and on-prem infrastructures Skillset to leverage cloud platform for Data Engineering, Big Data and ML needs. Use of Dockers (experience in experimental docker features, docker-compose, etc.,) Familiarity with orchestration tools such as airflow, Kubeflow Experience in CI/CD, infrastructure as code tools like terraform etc. Kubernetes or any other containerization tool with experience in Helm, Argoworkflow, etc., Ability to develop APIs with compliance, ethical, secure and safe AI tools. Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc., Deeper understanding of javascript, css, angular, html, etc., is a plus. Responsibilities Design NLP/LLM/GenAI applications/products by following robust coding practices, Explore SoTA models/techniques so that they can be applied for automotive industry usecases Conduct ML experiments to train/infer models; if need be, build models that abide by memory & latency restrictions, Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.,) Converge multibots into super apps using LLMs with multimodalities Develop agentic workflow using Autogen, Agentbuilder, langgraph Build modular AI/ML products that could be consumed at scale. Data Engineering: Skillsets to perform distributed computing (specifically parallelism and scalability in Data Processing, Modeling and Inferencing through Spark, Dask, RapidsAI or RapidscuDF) Ability to build python-based APIs (e.g.: use of FastAPIs/ Flask/ Django for APIs) Experience in Elastic Search and Apache Solr is a plus, vector databases. Qualifications Education : Bachelor’s or Master’s Degree in Computer Science, Engineering, Maths or Science Performed any modern NLP/LLM courses/open competitions is also welcomed. Show more Show less

Posted 2 months ago

Apply

4.0 - 6.0 years

12 - 21 Lacs

Hyderabad

Work from Office

Role : Vertex AI Developer Location : Hyderabad / Chennai (Hybrid) Roles and Responsibilities Roles and Responsibilities: Must have-Spring Reactive, Microservices, Java , GCP, AWS, LucidWorks Fusion, Apache Solr, Python, React.js, Google Vertex Search skill sets. Designing, developing, and deploying scalable and efficient microservices using Java and related technologies. Collaborating with cross-functional teams, including product managers, architects, and other developers, to define and implement microservices solutions. Writing clean, maintainable, and testable code following best practices and design patterns. Ensuring the performance, scalability, and reliability of the microservices by conducting thorough testing and optimization. Integrating microservices with other systems and third-party APIs to enable seamless data exchange. Monitoring, troubleshooting, and resolving issues in the microservices architecture to ensure high availability and performance. Good working knowledge of design pattern and good understanding of software development life cycle (SDLC). Critical Skills to Possess: 3+ years of work experience with web applications. Experience designing microservices using Spring, Spring Boot, Spring Cloud. Experience in both relational and NoSQL database – MySQL, Couchbase. Experience writing unit test(jUnit) cases during application development. Experience with Jenkins for build and deployment job and an understanding of CI/CD. "" Preferred Qualifications: Bachelor’s degree in computer science or a related field (or equivalent work experience)

Posted 2 months ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Looking for a dynamic senior React Developer to work as part of one of the top engineering companies in the world, handling some of the most complex scenarios. In this role, you will be building the UI for Apple Maps. APPLY ONLY IF YOU CAN JOIN BY 10th June 2025 Basic qualifications: Min 5 years of experience in front-end development primarily using React Js framework Strong hands-on experience in building UI with React, Javascript, Typescript Knowledge of Solr and Elastic Search queries is a plus Should have an eye to build beautiful UI with good user experience Experience working with GraphQL, and complex APIs to build data-driven UI applications Should have knowledge of Webpack Should possess excellent troubleshooting skills. Please share your resume, and our team will get back to you with more details. Show more Show less

Posted 2 months ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Role: Senior Java Engineer Location: Pune, India (Hybrid - 3 days a week in office) Experience: 8 - 12 Years Shift Timing: 9 AM - 5 PM general shift. Interview Rounds: Virtual (4 Rounds): 2 Internal + 2 Client. Mode of Work: Hybrid - 3 days a week in office. Office Location: Yerwada, Pune. Job Positions: 2. About Us We’re proud to be one of New York City’s fastest-growing product engineering consulting firms, dedicated to driving innovation and scalable growth for our clients. With eight consecutive years on the Inc. 5000 list of America’s Fastest-Growing Companies, we’ve earned a place in the elite Inc. 5000 Hall of Fame — an honor reserved for the top 1% of high-growth companies nationwide . What We Do We specialize in rapidly bringing our clients' most critical and strategic products to market — with high velocity, exceptional quality, and 10x impact. By embedding modern tools, proven methodologies, and forward-thinking leadership, we help build innovative, high-performing teams that thrive in today’s fast-paced digital landscape. This is a unique opportunity to join a dynamic and evolving team. Our client roster includes industry leaders such as Goldman Sachs, Fidelity, Morgan Stanley, and Mastercard. From greenfield innovations to tier-one product builds, our teams lead the delivery of mission-critical projects across product strategy, design, cloud-native applications, and both mobile and web development. The work we do shapes industries — and transforms the way people live, work, and think. About the Role: Senior Java Engineer As a Senior Java Engineer, you will collaborate with lead-level and fellow senior-level engineers to architect and implement solutions that maximize client offerings. In this role, you will develop performant and robust Java applications while continuously evaluating and advancing web technologies within the organization. Responsibilities:- Work on a high-velocity scrum team. Collaborate with clients to devise solutions for real-world problems. Architect and implement scalable end-to-end Web applications. Support the team lead in facilitating development processes. Provide estimates and milestones for features/stories. Work with your mentor for personal learning and growth, and mentor less experienced engineers. Contribute to the growth of it through interviewing and architectural contributions. Qualifications (Core Requirements) 5+ years of Java development within an enterprise-level domain. Proficiency with Java 8 (Java 11 preferred) features such as lambda expressions, Stream API, Completable Future, etc. Skilled in low-latency, high-volume application development. Expertise in CI/CD and shift-left testing. Nice to have: Golang and/or Rust. Experienced with asynchronous programming, multithreading, implementing APIs, and Microservices, including Spring Boot. Proficiency with SQL. Experience with data sourcing, data modeling, and data enrichment. Experience with Systems Design & CI/CD pipelines. Cloud computing, preferably AWS. Solid verbal and written communication and consultant/client-facing skills are a must. As a true consultant, you are a self-starter who takes initiative. Solid experience with at least two (preferably more) of the following: Kafka (Core Concepts, Replication & Reliability, Kafka Internals, Infrastructure & Control, Data Retention and Durability). MongoDB. Sonar. Jenkins. Oracle DB, Sybase IQ, DB2. Drools or any rules engine experience. CMS tools like Adobe AEM. Search tools like Algolia, ElasticSearch, or Solr. Spark. What Makes You Stand Out From The Pack Payments or Asset/Wealth Management experience. Mature server development and knowledge of frameworks, preferably Spring. Enterprise experience working and building enterprise products, long-term tenure at enterprise-level organizations, experience working with a remote team, and being an avid practitioner in their craft. You have pushed code into production and have deployed multiple products to market, but are seeking the visibility of a small team within a large enterprise technology environment. You enjoy coaching junior engineers, but want to remain hands-on with code. Open to hybrid work - 3 days per week from the office. Must-Haves Mandatory: Core Java, SOLID Principles, Multithreading, Design patterns. Spring, Spring Boot, Rest API, Microservices. Kafka, Messaging/streaming stack. JUnit. Code Optimization, Performance Design, Architecture concepts. Database and SQL. CI/CD - Understanding of Deployment, Infrastructure, Cloud. No gaps in organization. No job hoppers (candidate must have good stability). Joining time/notice period: Immediate to 30 days. Nice To Haves Good to have: Network Stack - gRPC, HTTP/2 etc. Security Stack (OWASP, OAuth, encryption). Good Communication. Agile. Skills: elasticsearch,junit,ci/cd,data sourcing,high-volume application development,spring, spring boot, rest api, microservices.,messaging/streaming stack,asynchronous programming,multithreading,kafka,code optimization,spark,sybase iq,db2,cloud computing,solr,data enrichment,algolia,sql,solid principles,spring,aws,database,rust,java 8/11,core java,golang,rest api,spring boot,java,sonar,design patterns,systems design,performance design,cms tools,java 8 / java 11,data modeling,messaging/streaming,core java, solid principles, multithreading, design patterns,oracle db,search tools,drools,jenkins,kafka, messaging/streaming stack.,mongodb,microservices,adobe aem,cloud computing (aws),database and sql,architecture concepts,low-latency application development Show more Show less

Posted 2 months ago

Apply

4.0 - 6.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

About This Role BlackRock’s Aladdin Wealth Tech organization (AWT) is at the centre of cutting-edge technologies that serve financial advisors and investors. AWT consolidates BlackRock’s technology offering in Retail Wealth with the goal to provide highly functional tools and services that can help clients manage their book of business. This Associate requires solid hands-on programming, creative problem solving, and troubleshooting skills with an enthusiasm for learning. This role also offers the opportunity to partner with proficient software engineers to continue to build both new systems and enhance existing ones. Responsibilities Designing, coding, testing, and supporting reliable, robust software applications and services with high-quality standards. Participating actively in multi-functional feature requirements gathering, design, and implementation. Collaborating with product management on the right units of work to include in each development sprint. Performing code reviews and providing timely feedback to other engineers. Desired Skills Proficient in backend/ Middleware/ Platform development. 4-6 years of experience and strong hands-on with Java 8 or later. Strong in understanding of Java-based frameworks such as Spring (Core, boot, MVC, ORM, JDBC) and should be able to design distributed and scalable applications. Exposure to NOSQL DB like Cassandra, MongoDB, along with Solr/ElasticSearch would be a big plus. Experience with scripting languages like Shell, Perl, Python. Experience with SQL , experience on RDBMS like Sybase/MSSQL etc. Some experience (could be academic) with cloud platforms such as Amazon Web Services (AWS) or Microsoft Azure Experience in developing in microservices architecture and/or writing APIs. Exposure with build tools (like Maven), Git, Splunk, JIRA. Passion for finance and knowledge of the portfolio management space. Proficiency building and passionate about creating great products that solve big problems. Strong teamwork, interpersonal skills and time management abilities. Strong analytical skills and passion to understand existing systems. Good to Have but not Mandatory Exposure to Spark and Scala Qualifications B.Tech/MCA in Computer Science or related technical field, or equivalent experience 4-6 years of professional software development experience. Strong object-oriented programming knowledge along with good understanding of algorithms, data structures and design patterns. Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Show more Show less

Posted 2 months ago

Apply

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen's Mission to Serve Patients If you feel like you’re part of something bigger, it’s because you are. At Amgen, our shared mission—to serve patients—drives all that we do. It is key to our becoming one of the world’s leading biotechnology companies. We are global collaborators who achieve together—researching, manufacturing, and deliver ever-better products that reach over 10 million patients worldwide. It’s time for a career you can be proud of. Product Manager - Content Curator Live What You Will Do Let’s do this. Let’s change the world. In this vital role We are seeking a detail-oriented and research-savvy Content Curator to support our enterprise Search Program within the pharmaceutical sector. This role is critical to improving how scientists, researchers, clinicians, and business teams discover relevant, accurate, and well-structured information across vast internal and external data sources. You will curate, classify, and optimize content to ensure it is accessible, contextual, and aligned with regulatory standards. Curate scientific, clinical, regulatory, and commercial content for use within internal search platforms. Sourcing and aggregating relevant content across various platforms. Ensure high-value content is properly tagged, described, and categorized using standard metadata and taxonomies. Identify and fill content gaps based on user needs and search behavior. Organizing and scheduling content publication to maintain consistency. Analyzing content performance and making data-driven decisions to optimize engagement Provide feedback and input on synonym lists, controlled vocabularies, and NLP enrichment tools Apply and help maintain consistent metadata standards, ontologies, and classification schemes (e.g., MeSH, SNOMED, MedDRA). Work with taxonomy and knowledge management teams to evolve tagging strategies and improve content discoverability. Capture and highlight the best content from a wide range of topics Stay up-to-date on best practices and make recommendations for content strategy Edit and optimize content for search engine optimization Perform quality assurance checks on all content before publication Identify and track metrics to measure the success of content curation efforts Review and curate content from a wide variety of categories with a focus Understanding of fundamental data structures and algorithms Understanding how to optimize content for search engines is important for visibility. Experience in identifying, organizing, and sharing content. Ability to clearly and concisely communicate complex information. Ability to analyze data and track the performance of content. Ability to quickly adapt to changing information landscapes and find new resources. A deep understanding of Google Cloud Platform services and technologies is crucial and will be an added advantage Check and update digital assets regularly and, if needed, modify their accessibility and security settings Investigate, secure, and properly document permission clearance to publish data, graphics, videos, and other media Develop and manage a system for storing and organizing digital material Convert collected assets to a different digital format and discard the material that is no longer relevant or needed Investigate new trends and tools connected with the generation and curation of digital material Basic Qualifications: Degree in Data Management, Mass communication and computer science & engineering preferred with 9-12 years of software development experience 5+ years of experience in (digital) content curation or a related position Excellent organizational and time-management skills. Ability to analyze data and derive insights for content optimization. Familiarity with metadata standards, taxonomy tools, and content management systems. Ability to interpret scientific or clinical content and structure it for digital platforms. Ability to analyze data and derive insights for content optimization. Exceptional written and verbal communication skills. Experience in Content Management Systems (CMS), SEO, Google Analytics, GXP Search Engine/ Solr Search, enterprise search platforms, data bricks Strong problem solving, analytical skills; Ability to learn quickly; Excellent communication and interpersonal skills Exceptional written and verbal communication skills. Excellent organizational and time-management skills. Preferred Qualifications: Experience with enterprise search platforms (e.g., Lucene, Elasticsearch, Coveo, Sinequa). Experience with GCP Cloud/AWS cloud /Azure Cloud Experience GXP Search Engine/ Solr Search Experience in Posgres SQL /Mongo DB SQL database, vector database for large language models, Databricks or RDS, Dynamo DB, S3 Experience in Agile software development methodologies Good To Have Skills Willingness to work on AI Applications Experience with popular large language models Experience with Langchain or llamaIndex framework for language models Experience with prompt engineering, model fine tuning Knowledge of NLP techniques for text analysis and sentiment analysis Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global teams. High degree of initiative and self-motivation. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. Thrive What You Can Expect From Us As we work to develop treatments that take care of others, we also work to care for our teammates’ professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. Show more Show less

Posted 2 months ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Now let’s help you understand the role !! About your new role!! ● You'll work with Android/web developers to develop backend services that meet their needs ● Identify libraries and technologies that solve our problems and/or are worth experimentation ● Verify user feedback in making system more stable and easy ● Learn and use core AWS technologies to design and then build available and scalable backend web services and customer-facing APIs. ● Experience in agile methodologies like Scrum Good understanding of branching, build, deployment, continuous integration methodologies What Makes You A Great Fit : ● Experience working on scalable, high availability applications/ services. ● Good understanding of data structures, algorithms and design patterns. ● Excellent analytical and problem-solving skills. ● Hands on experience in Python and familiarity with at least one framework (Django, Flask etc) ● Good exposure in writing and optimising SQL(such as PostgreSQL) for high-performance systems with large databases. ● Understanding of message queues, pub-sub, and in-memory data stores like Memcache / Redis ● Experience with NoSQL and distributed databases like MongoDB, Cassandra etc. ● Comfortable with search engines like ElasticSearch or SOLR. Show more Show less

Posted 2 months ago

Apply

4.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Role: Drupal Developer Location: Juhi Nagar, Navi Mumbai (Work from Office – Alternate Saturdays will be working) Experience: 4+ years Joining: Immediate Joiners Only Work Mode: This is a Work from Office role. Work Schedule: Alternate Saturdays will be working. About company: It is an innovative technology company focused on delivering robust web solutions. (Further company details would typically be inserted here once provided by the client.) We are looking for talented individuals to join our team and contribute to cutting-edge projects. The Opportunity: Drupal Developer We are seeking an experienced and highly skilled Drupal Developer to join our team. The ideal candidate will have a strong understanding of Drupal's architecture and a proven track record in developing custom modules, implementing sophisticated theming, and integrating with various APIs. This is a hands-on role for an immediate joiner who is passionate about building secure, scalable, and high-performance Drupal applications. Key Responsibilities Develop and maintain custom Drupal modules using Hooks, Plugin system, Form API, and Entity API. Implement and work with REST, JSON:API, and GraphQL within Drupal for seamless data exchange. Design and implement Drupal themes using Twig templating engine and preprocess functions to ensure a consistent and engaging user experience. Configure and manage user roles and access control to maintain application security and data integrity. Apply best practices in securing Drupal applications, identifying and mitigating potential vulnerabilities. Integrate Drupal with various third-party APIs and external systems. Collaborate with cross-functional teams to define, design, and ship new features. Contribute to all phases of the development lifecycle, from concept to deployment and maintenance. Requirements Experience: 4+ years of professional experience in Drupal development. Custom Module Development: Strong understanding and hands-on experience with custom module development (Hooks, Plugin system, Form API, Entity API). API Integration (Drupal): Proficiency with REST / JSON:API / GraphQL in Drupal. Drupal Theming: Experience with Drupal theming using Twig and preprocess functions. Security & Access Control: Experience with user roles and access control, and a strong understanding of best practices in securing Drupal applications. Third-Party Integration: Familiarity with APIs and third-party integration. Joining: Immediate Joiners Only. Preferred Experience Experience with Rocket.Chat integration or other messaging tools. Exposure to Solr/Elasticsearch using Drupal Search API. Skills: rocket.chat integration,api integration,security,drupal development.,hooks,api integration (drupal),custom module development,json:api,form api,drupal theming,plugin system,third-party integration,graphql,drupal,rest,preprocess functions,entity api,twig,access control Show more Show less

Posted 2 months ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

What is your Role? You will work in a multi-functional role with a combination of expertise in System and Hadoop administration. You will work in a team that often interacts with customers on various aspects related to technical support for deployed system. You will be deputed at customer premises to assist customers for issues related to System and Hadoop administration. You will Interact with QA and Engineering team to co-ordinate issue resolution within the promised SLA to customer. What will you do? Deploying and administering Hortonworks, Cloudera, Apache Hadoop/Spark ecosystem. Installing Linux Operating System and Networking. Writing Unix SHELL/Ansible Scripting for automation. Maintaining core components such as Zookeeper, Kafka, NIFI, HDFS, YARN, REDIS, SPARK, HBASE etc. Takes care of the day-to-day running of Hadoop clusters using Ambari/Cloudera manager/Other monitoring tools, ensuring that the Hadoop cluster is up and running all the time. Maintaining HBASE Clusters and capacity planning. Maintaining SOLR Cluster and capacity planning. Work closely with the database team, network team and application teams to make sure that all the big data applications are highly available and performing as expected. Manage KVM Virtualization environment. What skills you should have? Technical Domain: Linux administration, Hadoop Infrastructure and Administration, SOLR, Configuration Management (Ansible etc). Show more Show less

Posted 2 months ago

Apply

4.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

We are looking for a Senior Java Backend Developer with expertise in Spring Boot, microservices, and cloud technologies. The ideal candidate will have experience in building scalable web applications, working with AWS, and integrating with third-party services like Kafka and RabbitMQ. Key Responsibilities: Design, develop, and maintain backend solutions using Java and Spring Boot. Build and manage microservices and distributed systems. Integrate with third-party services (e.g., OAuth, cloud APIs, message brokers like Kafka and RabbitMQ). Ensure system performance, scalability, and reliability through proper design and architecture. Work with MongoDB and other databases to manage and optimize data storage. Deploy and manage services on AWS cloud infrastructure. Collaborate with front-end teams and stakeholders to deliver high-quality web applications. Participate in the SDLC, ensuring best practices are followed in code development and testing. Required Skills: 4+ years of experience in Java backend development. Proficiency in Spring Boot and RESTful APIs. Experience with microservices architecture and AWS. Hands-on experience with Kafka, RabbitMQ, and MongoDB. Familiarity with the full SDLC and agile methodologies. Strong problem-solving skills and ability to optimize system performance. Preferred Skills: Knowledge of ElasticSearch, Solr, Docker, and CI/CD pipelines. Familiarity with security best practices for web apps. About Us: Entire Globe Allied Pvt. Ltd provides services in the IT & BPO industry. Offering our services globally and connecting all to the world of innovation. We believe in providing the best solutions to our clients keeping Customer’s satisfaction and brand’s reputation in mind. We are in the business of outsourcing services, providing complete business solutions for Start-ups, Small and Medium Businesses and currently expanding reach towards large enterprises. We at EG Allied engage ourselves with innovative ideas to get competitive advantage over global competition. Contact Us: E-Mail: info@egallied.com Website: www.egallied.com To know more about us visit our website Show more Show less

Posted 2 months ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About OnGrid We are India's fastest-growing digital trust platform offering services such as background verifications, reference checks, employee/staff onboarding, etc. We have completed more than 500 million checks across 3,000+ happy clients, and since its inception, the company has shown continuous upward growth in an ever-changing business environment. As an organization, we are self-sustainable with positive cash flows. At OnGrid, we are focused on redefining and reimagining Trust. We are leveraging APIs to build a digital trust platform, all while being accountable for providing verified credentials instantly coming from the source directly. Having built the basic pillars of trust, we now want our imaginations to be let loose and think of avenues not explored and ways never implemented before. In this pursuit, we are looking for a motivated Staff Engineer (Backend) with experience in building high-performing, scalable, enterprise-grade applications to join us, driving this vision, and taking it to a larger scale. If you are a visionary, always on the lookout for the right solutions, and a technology geek, constantly seeking to learn and improve your skillset, then you are the type of person we are looking for. Roles & Responsibilities : - Develop new user-facing features. - Work closely with the product to understand our requirements, and design, develop, and iterate through the complex architecture. - Writing clean, reusable, high-quality, high-performance, maintainable code. - Encourage innovation and efficiency improvements to ensure processes are productive. - Ensure the training and mentoring of the team members. - Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed. - Research and apply new technologies, techniques, and best practices. - Team mentorship and leadership. Requirements : - Bachelor's/Master's in Computer Science or a related field. - 6-8 years of prior relevant experience. - Experience with web technologies and microservices architecture. - Java, Spring framework. - MySQL, Mongo, Solr, Redis,. - Kubernetes, Docker. - Excellent teamwork skills, flexibility, and ability to handle multiple tasks. - Experience with API Design, ability to architect and implement an intuitive customer and third-party integration story. - Ability to think and analyze both breadth-wise (client, server, DB, control flow) and depth-wise (threads, sessions, space-time complexity) while designing and implementing services. - Exceptional design and architectural skills. - Experience with cloud providers/platforms like GCP and AWS. Show more Show less

Posted 2 months ago

Apply

1.0 - 5.0 years

8 - 12 Lacs

Mumbai

Work from Office

We're HiringDrupal Developer with React! We are looking for an experienced Drupal Developer with 3-5 years of expertise in Drupal development, customisation, and integration The ideal candidate will work on building, maintaining, and optimising Drupal-based applications, ensuring scalability, security, and performance. “ LocationMumbai Suburban, India Work ModeWork From Office ’ RoleDrupal Developer with React What You'll Do Develop, customize, and maintain Drupal 8/9/10 applications. Work on custom module development and theme customization Develop and Integrate Headless Drupal solutions Implement Drupal site-building techniques, including Views, Blocks, and Entity API. Implement React-storybook components Handle content migration and upgrades between Drupal versions. Optimize site performance, caching strategies, and security measures. Troubleshoot and debug Drupal applications efficiently. Integrate third-party APIs Collaborate with UI/UX designers and front-end developers for seamless implementation Manage Drupal Configuration Management and deployment processes Work within an Agile environment, write story points, Implementation notes, Risk/assumptions to ensure smooth project execution Collaborate with Testing team for quality product deliveries Promote Best practices and efficient workflow Conduct code reviews, performance tuning, and debugging to ensure high-quality releases Manage large scale projects What Were Looking For 4-6 years of hands-on experience in Drupal development. Strong proficiency in custom module/theme development. Strong proficiency in PHP, MySQL, JavaScript, jQuery, and AJAX. Strong proficiency in ReactJS-storybook component development. Experience of consuming/creating RESTful APIs. Experience with Symphony framework, Twig templating and front-end technologies. Experience in Algolia, SOLR search engine. Familiarity with Composer, Drush, and Drupal CLI. Experience in Drupal API, Hooks, Plugins, and Services. Experience in Acquia/AWS cloud, Varnish, memcached. Experience with Git version control and CI/CD pipelines. Exposure to Drupal caching, performance tuning, and security best practices. Knowledge of Figma & Adobe XD tools. Strong problem-solving and debugging skills. Ability to work independently and as part of a team. Excellent communication and collaboration skills. Preferred Qualifications Experience with Headless Drupal & Decoupled architectures. Knowledge of Acquia Cloud or other cloud hosting platforms. Familiarity with Docker, Docksal, DDEV Understanding of Agile methodologies and Jira workflows. Good To Have Experience in WordPress development Knowledge of WAF, SSL, Networking Knowledge of CI/CD pipelines and automate deployment processes Contribution in Drupal.org community Acquia Certification in Drupal KPI- Maintain clean, efficient, and well-documented code with minimal defects Complete assigned tasks within deadlines while meeting project requirements Work effectively with teams, providing clear updates and feedback Identify and resolve issues efficiently, contributing to process improvements Stay updated with industry trends and enhance skills through training and certifications comply with security standards Ready to take your career to the next levelš" Apply now and lets build something amazing together! Show more Show less

Posted 2 months ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The HiLabs Story HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes. HiLabs Team Multidisciplinary industry leaders Healthcare domain experts AI/ML and data science experts Professionals hailing from the worlds best universities, business schools, and engineering institutes including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Job Title : Lead Data Scientist Job Location : Pune Job summary: HiLabs is looking for highly motivated and skilled Lead/Sr. Data Scientist focused on the application of emerging technologies. The candidates must be well versed with Python, Scala, Spark, SQL and AWS platform. The individuals who will join the new Evolutionary Platform team should be continually striving to advance AI/ML excellence and technology innovation. The mission is to power the next generation of the digital product and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast- paced environment. Responsibilities Leverage AI/ML techniques and solutions to identify and mathematically interpret complex healthcare problems. Full-stack development of data pipelines involving Big Data. Design and development of robust application/data pipelines using Python, Scala, Spark, and SQL Lead a team of Data Scientists, developers as well as clinicians to strategize, design and evaluate AI based solutions to healthcare problems. Increase efficiency and improve the quality of solutions offered. Managing the complete ETL pipeline development process from conception to deployment Collaborating with and guiding the team on writing, building, and deployment of data software Following best design and development practices to ensure high quality code. Design, build and maintain efficient, secure, reusable, and reliable code Perform code reviews, testing, and debugging Desired Profile Bachelor's or Master’s degrees in computer science, Mathematics, or any other quantitative discipline from Premium/Tier 1 institutions 5 to 7 years of experience in developing robust ETL data pipelines and implementing advanced AI/ML algorithms (GenAI is a plus). Strong experience working with technologies like Python, Scala, Spark, Apache Solr, MySQL, Airflow, AWS etc. Experience working with Relational databases like MySQL, SQLServer, Oracle etc. Good understanding of large system architecture and design Understands the core concepts of Machine Learning and the math behind it. Experience working in AWS/Azure cloud environment Experience using Version Control tools such as Bitbucket/GIT code repository Experience using tools like Maven/Jenkins, JIRA Experience working in an Agile software delivery environment, with exposure to continuous integration and continuous delivery tools Great collaboration and interpersonal skills Ability to work with team members and lead by example in code, feature development, and knowledge sharing HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application. HiLabs Total Rewards Competitive Salary, Accelerated Incentive Policies, H1B sponsorship, Comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs & a collaborative working environment, Smart mentorship, and highly qualified multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes. CCPA disclosure notice - https://www.hilabs.com/privacy Show more Show less

Posted 2 months ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description Summary Responsible for designing, building, delivering and maintaining software applications & services. Responsible for software lifecycle including activities such as requirement analysis, documentation/procedures and implementation. Job Description Roles and Responsibilities In This Role, You Will Collaborate with system engineers, frontend developers and software developers to implement solutions that are aligned with and extend shared platforms and solutions Apply principles of SDLC and methodologies like Lean/Agile/XP, CI, Software and Product Security, Scalability, Documentation Practices, refactoring and Testing Techniques Writes codes that meets standards and delivers desired functionality using the technology selected for the project Build features such as web services and Queries on existing tables Understand performance parameters and assess application performance Work on core data structures and algorithms and implement them using language of choice Education Qualification Bachelor's Degree in Computer Science or “STEM” Majors (Science, Technology, Engineering and Math) with basic experience. Desired Characteristics Technical Expertise Experience-3+ Years Frontend - Angular & React, .NET(Mandatory) Backend - Talend ETL tool (Mandatory) Search Engine - Solr DB - Microsoft SQL Server, Postgres Build & Deployment tools - Jenkins, Cruise control, Octopus Aware of methods and practices such as Lean/Agile/XP, etc. Prior work experience in an agile environment, or introductory training on Lean/Agile. Aware of and able to apply continuous integration (CI). General understanding of the impacts of technology choice to the software development life cycle. Business Acumen Has the ability to break down problems and estimate time for development tasks. Understands the technology landscape, up to date on current technology trends and new technology, brings new ideas to the team. Displays understanding of the project's value proposition for the customer. Shows commitment to deliver the best value proposition for the targeted customer. Learns organization vision statement and decision making framework. Able to understand how team and personal goals/objectives contribute to the organization vision Personal/Leadership Attributes Voices opinions and presents clear rationale. Uses data or factual evidence to influence. Learns organization vision statement and decision making framework. Able to understand how team and personal goals/objectives contribute to the organization vision. Completes assigned tasks on time and with high quality. Takes independent responsibility for assigned deliverables. Has the ability to break down problems and estimate time for development tasks. Seeks to understand problems thoroughly before implementing solutions. Asks questions to clarify requirements when ambiguities are present. Identifies opportunities for innovation and offers new ideas. Takes the initiative to experiment with new software frameworks Adapts to new environments and changing requirements. Pivots quickly as needed. When coached, responds to need & seeks info from other sources Write code that meets standards and delivers desired functionality using the technology selected for the project. Inclusion and Diversity GE HealthCare is an Equal Opportunity Employer where inclusion Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. We expect all employees to live and breathe our behaviors: to act with humility and build trust; lead with transparency; deliver with focus, and drive ownership – always with unyielding integrity. Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you’d expect from an organization with global strength and scale, and you’ll be surrounded by career opportunities in a culture that fosters care, collaboration and support. Additional Information Relocation Assistance Provided: No Show more Show less

Posted 2 months ago

Apply

0.0 years

0 Lacs

Delhi

On-site

Job requisition ID :: 83482 Date: May 27, 2025 Location: Delhi CEC Designation: Consultant Entity: E2E knowledge of Hybris Order Management and Hybris Data Hub integration. Experience in Spring, Hibernate and other JEE technologies frameworks. Experience in troubleshooting and bug fixing issues. Experience in X/HTML, CSS, JavaScript, XML and SQL. Understanding of current Hybris system implementation and customization - Hybris Core, Hybris eCommerce. Experience in Integration of Hybris using OCC Web services with systems like SAP Hybris / SAP Commerce Cloud Marketing, C4C and Data Hub. Good knowledge of both BC and BB.B Java JEE SpringC HTML, CSS and JavaScript, JQueryD Eclipse, Ant Maven. Experience in J2EE, Spring MVC, JSP, integrating various payment providers, SVN,GIT, B2C /B2B Hybris accelerator. Knowledge on Hybris ORM, WCMS, Backoffice, Cockpits,SOLR search engine. Expert in Hybris B2B, B2C Accelerators, Hybris Workflow and Task management. Expert in the catalog, order management, promotions, vouchers and coupons. Experience in working with JSPs, Java scripts, JQuery. Knowledge of SOLR or similar search technology. Experience in Source control using GIT etc.

Posted 2 months ago

Apply

0.0 years

0 Lacs

Delhi

On-site

Job requisition ID :: 83483 Date: May 27, 2025 Location: Delhi CEC Designation: Analyst Entity: E2E knowledge of Hybris Order Management and Hybris Data Hub integration. Experience in Spring, Hibernate and other JEE technologies frameworks. Experience in troubleshooting and bug fixing issues. Experience in X/HTML, CSS, JavaScript, XML and SQL. Understanding of current Hybris system implementation and customization - Hybris Core, Hybris eCommerce. Experience in Integration of Hybris using OCC Web services with systems like SAP Hybris / SAP Commerce Cloud Marketing, C4C and Data Hub. Good knowledge of both BC and BB.B Java JEE SpringC HTML, CSS and JavaScript, JQueryD Eclipse, Ant Maven. Experience in J2EE, Spring MVC, JSP, integrating various payment providers, SVN,GIT, B2C /B2B Hybris accelerator. Knowledge on Hybris ORM, WCMS, Backoffice, Cockpits,SOLR search engine. Expert in Hybris B2B, B2C Accelerators, Hybris Workflow and Task management. Expert in the catalog, order management, promotions, vouchers and coupons. Experience in working with JSPs, Java scripts, JQuery. Knowledge of SOLR or similar search technology. Experience in Source control using GIT etc.

Posted 2 months ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About OnGrid We are India's fastest-growing digital trust platform offering services such as background verifications, reference checks, employee/staff onboarding, etc. We have completed more than 500+ million checks across 3000+ happy clients and since its inception, the company has shown continuous uptrend growth in an ever-changing business environment. As an organization, we are self-sustainable with positive cash flows. At OnGrid, we are focused on redefining and reimagining Trust. We are leveraging APIs to build a digital trust platform all while being accountable for providing verified credentials instantly coming from the source directly. Having built the basic pillars of trust, we now want our imaginations to be let loose and think of avenues not explored and ways never implemented before. In this pursuit, we are looking for a motivated Staff Engineer (Backend) with experience in building high-performing, scalable, enterprise-grade applications, to join us, driving this vision, and taking it to a larger scale. If you are a visionary, always on the lookout for the right solutions, and a technology geek, constantly seeking to learn and improve your skillset, then you are the type of person we are looking for. Roles & Responsibilities Develop new user-facing features. Work alongside the product to understand our requirements, and design, develop and iterate, think through the complex architecture. Writing clean, reusable, high-quality, high-performance, maintainable code. Encourage innovation and efficiency improvements to ensure processes are productive. Ensure the training and mentoring of the team members. Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed. Research and apply new technologies, techniques, and best practices. Team mentorship and leadership. Requirements Bachelors/Masters in Computer Science or a related field. 6 10 years of prior relevant experience. Experience with web technologies and microservices architecture. Java, Spring framework. MySQL, Mongo, Solr, Redis,. Kubernetes, Docker. Excellent teamwork skills, flexibility, and ability to handle multiple tasks. Experience with API Design, ability to architect and implement an intuitive customer and third-party integration story. Ability to think and analyze both breadth-wise (client, server, DB, control flow) and depth-wise (threads, sessions, space-time complexity) while designing and implementing services. Exceptional design and architectural skills. Experience with cloud providers/platforms like GCP and AWS. (ref:hirist.tech) Show more Show less

Posted 2 months ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

About This Role About the Role BlackRock’s Aladdin Wealth Tech organization (AWT) is at the center of cutting-edge technologies that serve financial advisors and investors. AWT consolidates BlackRock’s technology offering in Retail Wealth with the goal to provide highly functional tools and services that can help clients manage their book of business. This Associate requires solid hands-on programming, creative problem solving, and troubleshooting skills with an enthusiasm for learning. This role also offers the opportunity to partner with proficient software engineers to continue to build both new systems and enhance existing ones. Responsibilities Designing, coding, testing, and supporting reliable, robust software applications and services with high-quality standards. Participating actively in multi-functional feature requirements gathering, design, and implementation. Collaborating with product management on the right units of work to include in each development sprint. Performing code reviews and providing timely feedback to other engineers. Desired Skills Proficient in backend/ Middleware/ Platform development. 5+ Experience and strong hands-on with Java 8 or later. Strong in understanding of Java-based frameworks such as Spring (Core, boot, MVC, ORM, JDBC) and should be able to design distributed and scalable applications. Strong Experience in developing in microservices architecture and/or writing APIs. Exposure to NOSQL DB like Cassandra, MongoDB, along with Solr/ElasticSearch would be a big plus. Experience with scripting languages like Shell, Perl, Python. Experience with SQL , experience on RDBMS like Sybase/MSSQL etc. Some experience (could be academic) with cloud platforms such as Amazon Web Services (AWS) or Microsoft Azure Exposure with build tools (like Maven), Git, Splunk, JIRA. Strong verbal and written communication skill. Passion for finance and knowledge of the portfolio management space. Proficiency building and passionate about creating great products that solve big problems. Strong teamwork, interpersonal skills and time management abilities. Strong analytical skills and passion to understand existing systems. Qualifications B.Tech/MCA in Computer Science or related technical field, or equivalent experience 5+ years of professional software development experience. Strong object-oriented programming knowledge along with good understanding of algorithms, data structures and design patterns. Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Show more Show less

Posted 2 months ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies