
1245 Elasticsearch Jobs - Page 42

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Greater Hyderabad Area

On-site


Scope
Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant product. BlueYonder is seeking an Architect in the Data Services department to act as one of the key technology leaders building and managing BlueYonder's technology assets in the Data Platform and Services. This individual will act as a trusted technical advisor and strategic thought leader for the Data Services department. The successful candidate will have the opportunity to lead, participate, guide, and mentor others on the team on architecture and design in a hands-on manner. You will be responsible for the technical direction of the Data Platform. This position reports to the Global Head, Data Services, and will be based in Bangalore, India.

Core responsibilities include architecting and designing (along with counterpart and distinguished Architects) a ground-up, cloud-native (we use Azure) SaaS product in order management and micro-fulfillment. The team currently comprises 60+ global associates across the US, India (COE), and the UK, and is expected to grow rapidly. The incumbent will need leadership qualities to mentor junior and mid-level software associates on the team. This person will lead the Data Platform architecture, both streaming and bulk, with Snowflake, Elasticsearch, and other tools.

Our Current Technical Environment
Software: Java, Spring Boot, Gradle, Git, Hibernate, REST API, OAuth, Snowflake
Application Architecture: scalable, resilient, event-driven, secure multi-tenant microservices architecture
Cloud Architecture: MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, Ignite

What You'll Do
- Lead the next-gen supply chain single source of architecture.
- Work with internal product, professional services, and customers on solutioning, understanding requirements, and getting deliverables done with the help of geographically distributed teams.
- Drive architectures and designs to become simpler, more robust, and more efficient.
- Write and review service descriptions, including relevant measures of service quality, and drive the architecture to deliver on these promises through self-healing, reliable services that require minimal manual intervention.

What We Are Looking For
- 10+ years of demonstrable experience with microservices-based architecture in the cloud at scale.
- Experience with big data technologies such as Snowflake, Scala, and Spark.
- Experience implementing event-driven architectures using Kafka, Spark, or similar technologies.
- Hands-on development skills along with architecture/design experience; candidates should not have moved away from software development.
- Demonstrable experience, thorough knowledge, and interest in cloud-native architecture, distributed microservices, multi-tenant SaaS solutions, cloud scalability, performance, and high availability.
- Experience with API management platforms and providing/consuming RESTful APIs.
- Experience with varied tools such as Spring Boot, OAuth, REST, GraphQL, Hibernate, NoSQL, RDBMS, Docker, Kubernetes, Kafka, and React.
- Experience with DevOps, Infrastructure as Code, and infrastructure automation.
- Good understanding of secure architectures, secure configuration, and identity management.

Our Values
If you want to know the heart of a company, take a look at its values. Ours unite us. They are what drive our success, and the success of our customers. Does your heart beat like ours? Find out here: Core Values

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Requirements
- 5+ years of experience as a Data Analyst or in a similar role, with a proven track record of collecting, cleaning, analyzing, and interpreting large datasets
- Expertise in pipeline design and validation
- Expertise in statistical methods, machine learning techniques, and data mining techniques
- Proficiency in SQL, Python, PySpark, Looker, Prometheus, Carbon, ClickHouse, Kafka, HDFS, and the ELK stack (Elasticsearch, Logstash, and Kibana)
- Experience with data visualization tools such as Grafana and Looker
- Ability to work independently and as part of a team
- Problem-solving and analytical skills to extract meaningful insights from data
- Strong business acumen to understand the implications of data findings

Responsibilities
- Collect, clean, and organize large datasets from various sources
- Perform data analysis using statistical methods, machine learning techniques, and data visualization tools
- Identify patterns, trends, and anomalies within datasets to uncover insights
- Develop and maintain data models to represent the organization's business operations
- Create interactive dashboards and reports to communicate data findings to stakeholders
- Document data analysis procedures and findings to ensure knowledge transfer
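The "identify anomalies within datasets" responsibility above can be sketched as a simple z-score filter. This is a minimal, stdlib-only Python illustration of the idea, not tied to any posting's actual toolchain; the 2.0-standard-deviation threshold is an assumed convention, not a prescribed value:

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Return values whose z-score exceeds the threshold.

    A common first-pass outlier check when scanning a numeric column;
    the threshold (in standard deviations) is a tunable assumption.
    """
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all values identical: nothing can be an outlier
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hypothetical sensor readings with one obvious outlier
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 55.0, 10.1]
print(find_anomalies(readings))  # [55.0]
```

In practice this first pass would run inside a PySpark or SQL pipeline over far larger data, but the statistical step is the same.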

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

India

Remote


Overview
The MongoDB / Elasticsearch / Linux Sysadmin / Ansible Engineer supports various database and automation tasks in a 24/7 Linux environment. A good candidate has knowledge that could be gained through one of the following:
- 2+ years of experience with MongoDB or Elasticsearch application support, DBA work, or development
- 3+ years of experience with other database application support, DBA work, or development
- 2+ years of Linux system administration
- 2+ years of Linux system automation

We will train for required expertise in database application support, Linux system administration, and other technologies as needed. Responsibilities include issue resolution for database application alerts, upgrades, patching, security, data restores and replication, backups, and performance tuning, as well as operating system and network troubleshooting. In addition to direct customer support, support engineers work with other teams such as DBAs, SREs, account managers, and developers as needed. Daily activities include responding to customer requests (tickets and chats), monitoring alerts and the production environment, providing acceptable system performance, and ensuring data are protected and recoverable as required. Support Data Engineer IIs own moderate to complex customer issues, which may take multiple days to resolve fully.

This position covers shifts 2 and 3 on a support engineer team that provides 24/7 operations support for MongoDB, Elasticsearch, and other database services in a Linux environment. Due to the 24x7 nature of the business, support engineers must be able to work a flexible schedule, which will include weekends, holidays, occasional nights, and emergency escalations. Support engineers take part in an on-call rotation during regular shift hours.

Important Skills and Experience
- Excellent troubleshooting skills, with the ability to resolve issues quickly and effectively
- Experience managing MongoDB, Elasticsearch, or other database configurations
- Good understanding of the Linux operating system, especially Debian- and CentOS-based distributions
- Ability to work well in teams, with good oral, written, and interpersonal skills
- Ability to communicate technical details and ideas and to write documentation
- Ability to work independently as part of a remote team

Useful Skills and Experience
- Excellent understanding of MongoDB or Elasticsearch database concepts and structures
- Proven experience implementing and supporting enterprise database systems
- Scripting experience with shell scripting or Python
- Configuration management and orchestration experience with Ansible, Puppet, or Chef
- Exposure to a broad range of technologies, especially Redis, Kubernetes, PostgreSQL, Hadoop, and Kafka

About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world's leading technologies, across applications, data, and security, to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work year after year by Fortune, Forbes, and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers, and deliver the future.

More on Rackspace Technology
Though we're all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe.

We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.

Posted 3 weeks ago

Apply

5.0 - 31.0 years

1 - 1 Lacs

Gurgaon/Gurugram

Remote


Role Summary
Red Orange Infotech is seeking experienced Data Engineers to join our team in Gurugram, India. In this role, you will design and implement robust, scalable data architectures that handle real-time and batch data processing using advanced open-source and cloud-native technologies. You will work closely with cross-functional teams to build data pipelines, manage distributed systems, and optimize data access patterns to support analytics and business intelligence initiatives. This is a full-time position requiring 5-10 years of experience, with a 5-day work-from-office commitment in Gurugram.

Key Responsibilities
· Design, build, and maintain scalable ETL/ELT pipelines for both structured and semi-structured data.
· Develop data ingestion and streaming pipelines using Kafka, Kafka Connect, and Debezium for real-time CDC (Change Data Capture).
· Implement data lake solutions and manage Apache Iceberg tables for large-scale, immutable data storage.
· Integrate and optimize Trino or PostgreSQL queries for high-performance analytics and federated querying.
· Use Flink for real-time data processing and streaming transformations.
· Manage large-scale search and analytics using Elasticsearch or OpenSearch.
· Collaborate with DevOps to deploy scalable solutions on containerized infrastructure, using Prometheus to monitor data pipeline health.
· Optimize and monitor cloud storage integrations such as Ceph for distributed data lake architectures.
· Implement robust data quality, validation, and governance mechanisms to ensure reliable data pipelines.

Right Candidate Profile
The ideal candidate is a technically adept Data Engineer with deep experience in real-time and batch data architectures. You are comfortable managing high-throughput streaming systems, optimizing distributed queries, and working across complex data ecosystems. You take ownership of the reliability and performance of your data flows, ensuring that downstream users receive accurate, timely data. With 3-7 years of experience, you thrive in collaborative environments and contribute meaningfully to Red Orange Infotech's mission of delivering innovation through process-driven, intelligent data systems.

Must-Have Skills
· Expertise in SQL, especially PostgreSQL and Trino query optimization.
· Strong experience with Apache Kafka, Kafka Connect, and Debezium for real-time data pipelines.
· Proficiency with Apache Iceberg for large-scale data lakehouse architectures.
· Hands-on experience with OpenSearch or Elasticsearch for querying and indexing large datasets.
· Experience with Apache Flink for real-time data processing and event-driven applications.
· Strong command of Python for data pipeline scripting and transformation logic.
· Familiarity with Prometheus for pipeline monitoring and system health metrics.
· Knowledge of cloud object storage systems like Ceph and their integration into data platforms.
· Deep understanding of ETL/ELT design patterns, data validation, and quality assurance.

Nice-to-Have Skills
· Experience integrating data workflows into CI/CD environments.
· Familiarity with data lakehouse architecture and versioned datasets.
· Exposure to stream-processing optimizations, watermarking, and late event handling.
· Understanding of data security, access controls, and compliance frameworks.

Qualifications and Work Mode
· Bachelor's degree in Computer Science, Engineering, or a related field.
· 3-7 years of relevant experience in data engineering or related big data roles.
· 5-day work-from-office role based in Gurugram, India.
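The CDC pipelines described above revolve around Debezium's change-event envelope: each row change arrives as JSON with `before`, `after`, and `op` fields, and a consumer routes on `op` ("c" create, "r" snapshot read, "u" update, "d" delete). A minimal stdlib-only Python sketch of that routing; the event below is a hand-written example in the Debezium style, not output from a real connector, and a production consumer would read from Kafka rather than a string:

```python
import json

# Hand-written example of a Debezium-style change event envelope
# (real events also carry a "source" block and schema metadata).
event_json = """
{"payload": {"op": "u",
             "before": {"id": 42, "status": "pending"},
             "after":  {"id": 42, "status": "shipped"},
             "ts_ms": 1700000000000}}
"""

def apply_change(table, event):
    """Apply one CDC event to an in-memory 'table' keyed by id."""
    payload = event["payload"]
    op = payload["op"]
    if op in ("c", "r", "u"):   # create, snapshot read, update: upsert "after"
        row = payload["after"]
        table[row["id"]] = row
    elif op == "d":             # delete: drop the row identified by "before"
        table.pop(payload["before"]["id"], None)
    return table

table = {42: {"id": 42, "status": "pending"}}
apply_change(table, json.loads(event_json))
print(table[42]["status"])  # shipped
```

The same upsert/delete routing is what a sink connector performs when materializing a CDC stream into Iceberg or Elasticsearch.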

Posted 3 weeks ago

Apply

7.0 - 9.6 years

0 Lacs

Greater Hyderabad Area

On-site


Scope
Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant product.

Our Current Technical Environment
Software: Python, AI models, multithreading, Git, REST API, OAuth
Application Architecture: scalable, resilient, event-driven, secure multi-tenant microservices architecture
Cloud Architecture: MS Azure (ARM templates, AI Assistants, AI Foundry, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, LangChain, Git

What You'll Do
- Understand and analyze business requirements and assist in design for accuracy and completeness.
- Develop and maintain the relevant product.
- Demonstrate a good understanding of the product and own one or more modules.

What We Are Looking For
- BE/B.Tech, ME/M.Tech, or MCA with 7 to 9.6 years of experience in software development of large and complex enterprise applications.
- Experience developing enterprise applications using Python.
- Develops and maintains relevant product and domain knowledge.
- Develops and executes unit tests.
- Follows standard processes and procedures.
- Identifies reusable components.
- Ensures that code is delivered for integration build and test, including the release content.
- Identifies and resolves software bugs.
- Tests and integrates with other development tasks.
- Adheres to performance benchmarks based on pre-defined requirements.
- Possesses knowledge of the database architecture and data models used in the relevant product.
- Plans and prioritizes work.
- Proactively reports all activities to the reporting managers, and proactively seeks assistance as required.
- Provides assistance or guidance to new members of the team.
- Demonstrates problem-solving and innovation ability.
- Participates in company technical evaluations and coding challenges.

Our Values
If you want to know the heart of a company, take a look at its values. Ours unite us. They are what drive our success, and the success of our customers. Does your heart beat like ours? Find out here: Core Values

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 3 weeks ago

Apply

6.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Mitsogo - Java Lead

About Mitsogo
Mitsogo is a global organization that highly values the contributions of each employee. Our ability to attract top talent is a testament to our commitment to fostering a sense of belonging for everyone. We recognize the rapid evolution of technology and society that impacts our industry, and we prioritize equipping our employees with diverse opportunities and empowering them with a wide range of skills.

About Hexnode
Hexnode, the enterprise software division of Mitsogo Inc., was founded with a mission to simplify the way people work. Operating in over 100 countries, Hexnode UEM empowers organizations in diverse sectors. Fueling the transformation to a seamless ecosystem of connected tools, Hexnode is revolutionizing the enterprise software and cybersecurity landscape.

Responsibilities
- Design and implement scalable and robust systems.
- Collaborate with cross-functional teams to drive product outcomes.
- Mentor engineers and foster a performance-first culture across teams.
- Define the right processes for the team's maturity level, balancing agility and discipline.

Requirements
- Bachelor's or Master's degree in Engineering or a related field.
- 6 to 9 years of experience in software/performance engineering.
- Strong knowledge of distributed systems and cloud-native architectures.
- Experience with AWS is a plus.
- Proficiency in scripting and automation.
- Deep understanding of SQL/NoSQL databases such as Postgres, Bigtable/Dynamo/Cassandra, and Redis.
- Experience with Elasticsearch (or other Lucene-based search engines) is a plus.
- Experience with big data systems such as Kafka, Spark, and Flink is preferred.
- Strong communication and documentation skills, in addition to experience mentoring and leading technical teams.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We have an opening for a Java Developer with our client in Pune.

Requirements
- 5+ years of Java development within an enterprise-level domain
- Java 8 (11 preferred) features such as lambda expressions, the Stream API, CompletableFuture, etc.
- Skilled in low-latency, high-volume application development
- Expertise in CI/CD and shift-left testing
- Nice to have: Golang and/or Rust
- Experienced with asynchronous programming, multithreading, implementing APIs, and microservices, including Spring Boot
- Proficiency with SQL
- Experience with data sourcing, data modeling, and data enrichment
- Experience with systems design and CI/CD pipelines
- Cloud computing, preferably AWS
- Solid verbal and written communication and consultant/client-facing skills are a must; as a true consultant, you are a self-starter who takes initiative
- Solid experience with at least two (preferably more) of the following: Kafka (core concepts, replication and reliability, Kafka internals, infrastructure and control, data retention and durability), MongoDB, Sonar, Jenkins, Oracle DB, Sybase IQ, DB2, Drools or other rules-engine experience, CMS tools such as Adobe AEM, search tools such as Algolia, Elasticsearch, or Solr, Spark

Mandatory
- Core Java, SOLID principles, multithreading, design patterns
- Spring, Spring Boot, REST API, microservices
- Kafka, messaging/streaming stack
- JUnit
- Code optimization, performance design, architecture concepts
- Database and SQL
- CI/CD: understanding of deployment, infrastructure, and cloud

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion. It's a place where you can grow, belong, and thrive.

Your day at NTT DATA
The Senior Software Development Engineer is an advanced subject matter expert, accountable for designing, developing, and testing software systems, modules, or applications for software enhancements and new products, including cloud-based or internet-related tools. This role takes accountability for the detailed design of certain modules/sub-systems, prototyping for multi-vendor infrastructure, and showcasing it internally or externally to clients. This role designs and develops functionality in a microservices environment, working with APIs and telemetry data and running ML/AI algorithms on both structured and unstructured data.

What You'll Be Doing
Key Responsibilities:
- Designs and develops solutions and functionality that drive business growth.
- Writes and tests code, and is accountable for the execution of automated testing.
- Contributes to software deployment.
- Works across multiple teams to deliver software components in collaboration with the product team.
- Designs and integrates solutions through automation and coding, using third-party software.
- Creates, crafts, and debugs large-scale distributed systems.
- Contributes to writing, updating, and maintaining the technical program, end-user documentation, and operational procedures.
- Refactors code and reviews code written by other developers across teams.
- Performs any other tasks as required.

Knowledge and Attributes:
- Excellent understanding of cloud architecture and services in multiple public clouds such as AWS, GCP, Microsoft Azure, and Microsoft Office 365.
- Subject matter expertise in programming languages such as C/C++, C#, Java, JavaScript, Python, and Node.js, along with their libraries and frameworks.
- Advanced expertise in data structures, algorithms, and software design, with strong analytical and debugging skills.
- Advanced knowledge of microservices-based software architecture and experience with API product development.
- Advanced expertise in SQL and NoSQL data stores, including Elasticsearch, MongoDB, and Cassandra.
- Advanced understanding of container runtimes (Kubernetes, Docker, LXC/LXD).
- Advanced proficiency with agile and lean practices, and a belief in test-driven development.
- A can-do attitude and initiative.
- Excellent ability to work well in a diverse team with different backgrounds and experience levels, and to thrive in a dynamic, fast-paced environment.
- Advanced proficiency with CI/CD concepts and tools, and with cloud-based infrastructure and deployments.
- Excellent attention to detail.

Academic Qualifications and Certifications:
- Bachelor's degree or equivalent in Computer Science, Engineering, or a related field.
- Microsoft Certified: Azure Fundamentals preferred.
- Relevant agile certifications preferred.

Required Experience:
- Advanced demonstrated experience working with geo-distributed teams through innovation, bootstrapping, pilot, and production phases with multiple stakeholders, to the highest levels of quality and performance.
- Advanced demonstrated experience with tools across the full software delivery lifecycle, for example IDE, source control, CI, test, mocking, work tracking, and defect management.
- Advanced demonstrated experience in Agile and Lean methodologies, Continuous Delivery/DevOps, and analytics/data-driven processes.
- Advanced proficiency in working with large data sets and the ability to apply proper ML/AI algorithms.
- Advanced demonstrated experience in developing microservices and RESTful APIs, and in software development.

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion. It's a place where you can grow, belong, and thrive.

Your day at NTT DATA
The Senior Software Development Engineer is an advanced subject matter expert, accountable for designing, developing, and testing software systems, modules, or applications for software enhancements and new products, including cloud-based or internet-related tools. This role takes accountability for the detailed design of certain modules/sub-systems, prototyping for multi-vendor infrastructure, and showcasing it internally or externally to clients. This role designs and develops functionality in a microservices environment, working with APIs and telemetry data and running ML/AI algorithms on both structured and unstructured data.

What You'll Be Doing
Key Responsibilities:
- Designs and develops solutions and functionality that drive business growth.
- Writes and tests code, and is accountable for the execution of automated testing.
- Contributes to software deployment.
- Works across multiple teams to deliver software components in collaboration with the product team.
- Designs and integrates solutions through automation and coding, using third-party software.
- Creates, crafts, and debugs large-scale distributed systems.
- Contributes to writing, updating, and maintaining the technical program, end-user documentation, and operational procedures.
- Refactors code and reviews code written by other developers across teams.
- Performs any other tasks as required.

Knowledge and Attributes:
- Excellent understanding of cloud architecture and services in multiple public clouds such as AWS, GCP, Microsoft Azure, and Microsoft Office 365.
- Subject matter expertise in programming languages such as C/C++, C#, Java, JavaScript, Python, and Node.js, along with their libraries and frameworks.
- Advanced expertise in data structures, algorithms, and software design, with strong analytical and debugging skills.
- Advanced knowledge of microservices-based software architecture and experience with API product development.
- Advanced expertise in SQL and NoSQL data stores, including Elasticsearch, MongoDB, and Cassandra.
- Advanced understanding of container runtimes (Kubernetes, Docker, LXC/LXD).
- Advanced proficiency with agile and lean practices, and a belief in test-driven development.
- A can-do attitude and initiative.
- Excellent ability to work well in a diverse team with different backgrounds and experience levels, and to thrive in a dynamic, fast-paced environment.
- Advanced proficiency with CI/CD concepts and tools, and with cloud-based infrastructure and deployments.
- Excellent attention to detail.

Academic Qualifications and Certifications:
- Bachelor's degree or equivalent in Computer Science, Engineering, or a related field.
- Microsoft Certified: Azure Fundamentals preferred.
- Relevant agile certifications preferred.

Required Experience:
- Advanced demonstrated experience working with geo-distributed teams through innovation, bootstrapping, pilot, and production phases with multiple stakeholders, to the highest levels of quality and performance.
- Advanced demonstrated experience with tools across the full software delivery lifecycle, for example IDE, source control, CI, test, mocking, work tracking, and defect management.
- Advanced demonstrated experience in Agile and Lean methodologies, Continuous Delivery/DevOps, and analytics/data-driven processes.
- Advanced proficiency in working with large data sets and the ability to apply proper ML/AI algorithms.
- Advanced demonstrated experience in developing microservices and RESTful APIs, and in software development.

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Role Description
Job Title: Pipeline Engineer
Hiring Location: any UST office

Mandatory Skills
- Hands-on development in Java/Scala: 2+ years of experience building scalable, robust microservices using Java or Scala.
- Microservices with Spring Boot: proficiency in developing microservices using the Spring Boot framework.
- Functional programming: knowledge of functional programming concepts and their application in software development.
- CI/CD workflows: experience with continuous integration (CI) and continuous delivery (CD) workflows and concepts.
- Big data technologies: familiarity with big data pipeline technologies, including Kafka, Spark, Zeppelin, Hadoop, and AWS EMR.
- Distributed data stores: experience with distributed data storage solutions such as S3, Cassandra, MongoDB, Elasticsearch, Couchbase, and Redis.
- Data serialization formats: familiarity with modern data serialization formats such as Protocol Buffers, Avro, and Thrift.
- Containerization and orchestration: working knowledge of containerization technologies such as Docker and orchestration tools such as Kubernetes and Helm.
- AWS ecosystem: familiarity with AWS services and cloud infrastructure.

Nice-to-Have Skills
- Strong Java development skills.
- Experience with CI/CD pipelines and automation tools.
- Deep understanding of and experience working with AWS services.
- Experience working with Kafka for building event-driven systems.
- Proficiency in using Docker for containerization and Kubernetes for orchestration.

Good-to-Have Skills
- Experience with Helm for Kubernetes package management.
- Understanding of ETL processes for data extraction, transformation, and loading.

Experience Range
2+ years of hands-on experience building scalable microservices and working with cloud-native and distributed systems.

Skills: Java, Kong, Kafka, Kubernetes

Posted 3 weeks ago


0 years

0 Lacs

Pune, Maharashtra, India

On-site


Overview: Guidepoint’s Qsight group is a new, high-growth division focused on building market-leading data intelligence solutions for the healthcare industry. Operating like a start-up within a larger high-growth company, Qsight works with proprietary data to generate actionable insights for the world’s leading institutional investors and medical device and pharmaceutical companies. The Qsight team is passionate about creating market intelligence products through rigorous analysis of alternative data to deliver highly relevant, accurate insights to a global client base.

Location: Pune - Hybrid

About The Role
We are looking for a Backend Developer Intern to join our team.

What You’ll Do: As an intern, you will have the opportunity to learn and grow in the following areas:

AWS Cloud Services
What you'll learn: Develop an understanding of cloud services by assisting in the creation and maintenance of AWS Lambda functions, using AWS CloudWatch for monitoring, and learning about AWS CloudFormation for deploying infrastructure as code.
Why it’s cool: Gain hands-on experience with industry-standard cloud technologies that are transforming how businesses build scalable applications.

Backend Development (Node.js)
What you'll learn: Write and maintain backend services in Node.js, develop APIs, and integrate them with AWS and other cloud services. Learn how to test and debug these APIs to ensure they work seamlessly.
Why it’s cool: Get real-world exposure to full-stack development and learn best practices for backend development with JavaScript, one of the most popular programming languages in the world.

Database Management (DynamoDB & Geospatial Data)
What you'll learn: Work with DynamoDB, a NoSQL database service, and understand its key-value data model. Explore how geospatial data is handled and learn how to design solutions for geospatial queries.
Why it’s cool: You'll be introduced to cutting-edge database management practices and get to work with geospatial data, which is highly relevant in modern applications like mapping, tracking, and location-based services.

Full-Text Search with OpenSearch
What you'll learn: Learn how to integrate and optimize OpenSearch for full-text search capabilities, including indexing, querying, and managing large datasets.
Why it’s cool: Understand how powerful search engines work and the importance of optimizing search queries to enhance performance in large-scale applications.

API Development & Integration
What you'll learn: Gain experience building and maintaining APIs that are scalable and efficient, while following best practices for asynchronous API design.
Why it’s cool: You'll dive into real-world API development and testing using tools like Postman, which is an essential skill for any modern developer.

Development Tools & Environment
What you'll learn: Become proficient with industry-standard tools like Git for version control, Docker for containerization, and VSCode for code editing.
Why it’s cool: You’ll learn how to work in a professional development environment and master tools that are crucial for software development.

Collaboration
What you'll learn: Work closely with experienced developers in a collaborative setting, participate in brainstorming sessions, and contribute to code reviews.
Why it’s cool: Learn how a professional development team functions, improve your collaboration and communication skills, and get real-time feedback from senior developers.

What You’ll Learn:
Hands-on Experience with AWS Cloud Technologies – Work with services like AWS Lambda, CloudWatch, and CloudFormation.
Backend Development Knowledge – Gain real-world experience developing, testing, and deploying backend applications with Node.js.
Cloud Database & Geospatial Data – Learn to work with DynamoDB and gain insight into how geospatial data is managed in the cloud.
Search Engine Technology – Explore full-text search with OpenSearch, and learn how to optimize large datasets for efficient search operations.
API Design and Testing – Get practical knowledge of designing, building, and maintaining APIs.
Industry Tools & Best Practices – Master tools like Git, Docker, and Postman that are widely used in the industry.
Team Collaboration – Work in a supportive environment, attend code reviews, and improve your skills through team collaboration.

Required Technical Qualifications:
A Strong Desire to Learn – You don’t need to be an expert, but you should have a genuine interest in cloud technologies, backend development, and modern tools.
Basic Knowledge of JavaScript – Familiarity with JavaScript or Node.js will be helpful but not required.
Interest in Cloud Platforms (AWS) – An eagerness to learn about AWS services like Lambda, DynamoDB, and CloudWatch.
Curiosity About APIs and Databases – A desire to dive into API development and gain knowledge about NoSQL databases and full-text search.
Problem-Solving Skills – You enjoy troubleshooting and finding solutions to technical challenges.
Communication Skills – You should be able to clearly express ideas and ask for help when needed.
Team Player – You’ll be collaborating with others, so being able to work well in a team is key.

Preferred Qualifications (Not Required):
Experience with OpenSearch, Elasticsearch, or other search engines.
Familiarity with CI/CD pipelines.
Exposure to Docker or other containerization technologies.
Knowledge of microservices architecture.

What We Offer:
Competitive compensation
Employee medical coverage
Central office location
Entrepreneurial environment, autonomy, and fast decisions
Casual work environment

About Guidepoint: Guidepoint is a leading research enablement platform designed to advance understanding and empower our clients’ decision-making process.
Powered by innovative technology, real-time data, and hard-to-source expertise, we help our clients to turn answers into action. Backed by a network of nearly 1.5 million experts and Guidepoint’s 1,300 employees worldwide, we inform leading organizations’ research by delivering on-demand intelligence and research on request. With Guidepoint, companies and investors can better navigate the abundance of information available today, making it both more useful and more powerful. At Guidepoint, our success relies on the diversity of our employees, advisors, and client base, which allows us to create connections that offer a wealth of perspectives. We are committed to upholding policies that contribute to an equitable and welcoming environment for our community, regardless of background, identity, or experience.
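For context on the full-text search work described in this posting: OpenSearch (like Elasticsearch) is driven by a JSON query DSL. A minimal Python sketch of a field mapping and a query body; the index fields (`title`, `category`) are hypothetical:

```python
import json

# Hypothetical index mapping: one analyzed text field plus a keyword
# field for exact filtering -- the usual split in full-text search.
mapping = {
    "mappings": {
        "properties": {
            "title": {"type": "text"},        # analyzed: tokenized and lowercased
            "category": {"type": "keyword"},  # not analyzed: exact values only
        }
    }
}

# A bool query combining relevance scoring (match, with fuzzy matching)
# and a non-scoring filter (term) -- filters are cacheable and cheap.
query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"title": {"query": "geospatial search", "fuzziness": "AUTO"}}}
            ],
            "filter": [{"term": {"category": "tutorial"}}],
        }
    },
    "size": 10,
}

# On the wire both documents are plain JSON,
# e.g. PUT /articles (mapping) and POST /articles/_search (query).
body = json.dumps(query)
assert "fuzziness" in body
```

Keeping exact-match predicates in `filter` rather than `must` is a common query optimization, since filters do not contribute to scoring and can be cached.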

Posted 3 weeks ago


6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Us
We are the independent expert in assurance and risk management. Driven by our purpose, to safeguard life, property, and the environment, we empower our customers and their stakeholders with facts and reliable insights so that critical decisions can be made with confidence. As a trusted voice for many of the world’s most successful organizations, we use our knowledge to advance safety and performance, set industry benchmarks, and inspire and invent solutions to tackle global transformations.

About The Role
Development and maintenance of CI/CD pipelines for the automation of build, test and deployment processes (e.g. with Jenkins, GitLab CI/CD, GitHub Actions).
Management and optimisation of cloud infrastructures (Azure) with Infrastructure as Code (Terraform, Ansible, CloudFormation).
Operation and scaling of container platforms (Docker, Kubernetes).
Implementation of monitoring and logging solutions (Prometheus, Grafana, ELK Stack, Datadog) for proactive system monitoring.
Troubleshooting & incident management: analysing system failures and performance bottlenecks.
Ensuring security & compliance best practices (secrets management, access controls, encryption, ISO 27001, GDPR).
Collaboration with software developers, QA teams and security experts to ensure a stable and efficient development and operating environment.
Mentoring and supporting junior and mid-level DevOps Engineers.

What You Bring With You
Several years of experience in software development, system administration or as a DevOps engineer.
Sound knowledge of cloud technologies (Azure) and container orchestration (Kubernetes, Docker).
Experience with CI/CD and automation tools (e.g. Terraform, Ansible, Helm, ArgoCD).
Knowledge of scripting/programming (Python, Bash, Go or comparable languages).
Good knowledge of monitoring, logging and security best practices.
Experience in performance optimisation & cost management in cloud environments.
Ability to work in a team & strong communication skills, as well as a solution-orientated way of working.

What we offer
Flexible work arrangements for better work-life balance.
Generous Paid Leaves (Annual, Sick, Compassionate, Local Public, Marriage, Maternity, Paternity, Medical leave).
Medical benefits (Insurance and Annual Health Check-up).
Pension and Insurance Policies (Group Term Life Insurance, Group Personal Accident Insurance, Travel Insurance).
Training and Development Assistance (Training Sponsorship, On-The-Job Training, Training Programme).
Additional Benefits (Long Service Awards, Mobile Phone Reimbursement).
Company bonus/Profit share.
Competitive remuneration.
Hybrid workplace model.
A culture of continuous learning to aid progression.
Personal Growth opportunity using our 70-20-10 philosophy: 70% learning on the job, 20% coaching and 10% training.
*Benefits may vary based on position, tenure/contract/grade level*

Equal Opportunity Statement
DNV is an Equal Opportunity Employer and gives consideration for employment to qualified applicants without regard to gender, religion, race, national or ethnic origin, cultural background, social group, disability, sexual orientation, gender identity, marital status, age or political opinion. Diversity is fundamental to our culture, and we invite you to be part of this diversity!

About You
As an IT DevOps Engineer, you will be responsible for ensuring the smooth functioning of our IT infrastructure.

Your Main Tasks Include
Management and scaling of the IT infrastructure, including cloud services such as Azure and local server infrastructures.
Automation of processes through the development and implementation of scripts and tools to simplify deployments, configurations and monitoring.
Establishment and maintenance of CI/CD pipelines to enable the continuous integration and deployment of applications.
Monitoring systems and applications, diagnosing and resolving performance issues, security vulnerabilities or failures.
Liaise with software development teams to ensure the efficient functioning of the development environment and assist in resolving infrastructure issues.
Creating and updating documentation on system configurations, processes and problem solutions.
Planning and scaling of resources to meet current and future requirements.
Management of incidents in the event of system failures or other operational disruptions, including coordination of response teams and communication with the affected parties.

Qualifications & Experience
Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
Above 6 years of relevant experience in the below areas:

Operating System Knowledge: Experience with Linux and/or Windows operating systems. Ability to configure and manage operating system resources.
Scripting and Automation: Strong knowledge of scripting languages such as Bash, PowerShell. Optional: Node.js, Python. Optional: Experience with automation tools like Ansible, Puppet, or Chef.
Container Technologies: Familiarity with container orchestration tools like Kubernetes. Knowledge of using Docker for application containerization.
Cloud Platforms: Experience with cloud services on Azure. Ability to deploy and manage resources in the cloud.
Version Control: Knowledge of working with version control systems like Git.
Continuous Integration and Continuous Deployment (CI/CD): Experience with Azure YAML Pipelines. Optional: CI/CD pipelines and tools like Jenkins, Travis CI, or GitLab CI.
Monitoring and Logging: Knowledge of implementing monitoring solutions like Prometheus or Grafana. Experience with logging tools like the ELK (Elasticsearch, Logstash, Kibana) Stack.
Network Knowledge: Basic network knowledge for configuring and managing network infrastructures.
Security: Understanding of security practices and measures in the DevOps context.
Knowledge of implementing security policies and procedures.
Team Collaboration: Ability to effectively collaborate with development teams (Dev) and IT operations teams (Ops). Communication skills for working in an agile environment.
Troubleshooting and Issue Resolution: Ability to quickly identify and resolve issues in the infrastructure.
Understanding of Infrastructure as Code (IaC): Experience with tools like Terraform or CloudFormation for automated infrastructure management.
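For context on the monitoring and logging responsibilities in this posting: tools like Logstash filters and Prometheus alert rules ultimately compute signals from event streams. A stdlib-only Python sketch of one such signal, an error rate over a hypothetical log format:

```python
import re
from collections import Counter

# Hypothetical log format: "<ISO timestamp> <LEVEL> <message>"
LOG_LINE = re.compile(r"^\S+ (?P<level>[A-Z]+) ")

def error_rate(lines):
    """Fraction of log lines at ERROR level -- the kind of signal a
    Logstash filter or a Prometheus alert rule would compute continuously."""
    levels = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            levels[m.group("level")] += 1
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

sample = [
    "2024-05-01T10:00:00Z INFO service started",
    "2024-05-01T10:00:01Z ERROR upstream timeout",
    "2024-05-01T10:00:02Z INFO request handled",
    "2024-05-01T10:00:03Z WARN slow query",
]
assert error_rate(sample) == 0.25
```

In production the parsing would be a Logstash grok pattern or a structured-logging pipeline, but the shape of the work, extracting fields and aggregating them into an alertable metric, is the same.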

Posted 3 weeks ago


0 years

0 Lacs

India

On-site


We’re Hiring: Full Stack Developer (Backend-Focused) – Microservices | Cloud | Kotlin/Java | AWS

Join our growing team as a Full Stack Developer, where you’ll build scalable, cloud-native solutions for high-impact projects.

What We’re Looking For:
Strong background in Microservices Architecture and Cloud platforms (preferably AWS)
Backend expertise in Kotlin/Java (Spring) or Ruby
Hands-on experience with databases like PostgreSQL, ScyllaDB/Cassandra, MongoDB, Redis, Elasticsearch
Solid understanding of DevOps practices: CI/CD, observability (Dynatrace, Grafana, OpenTelemetry), Docker/Kubernetes
Passion for automated testing and writing clean, secure code
Some exposure to frontend technologies – React, TypeScript/JavaScript
Bonus: Experience with Twilio APIs

Posted 3 weeks ago


0.0 - 5.0 years

0 Lacs

Pune, Maharashtra

Remote


Only Immediate Joiners, Serving Notice Period, or 1-Month Notice Period Candidates are Considered for this role.

We are urgently looking for a Senior Java Engineer. As a Senior Java Engineer, you will work with lead-level and fellow senior-level engineers to architect and implement solutions that enable customers to get the most out of what the client can offer. In this role, you will develop performant and robust Java applications while supporting the continued evaluation and advancement of web technologies in the organization.

Responsibilities:
Work on a high-velocity scrum team
Work with clients to come up with solutions to real-world problems
Architect and implement scalable end-to-end Web applications
Help the team lead facilitate development processes
Provide estimates and milestones for features/stories
Work with your mentor to learn and grow, and mentor less experienced engineers
Contribute to the growth of InRhythm via interviewing and architecting

What you bring to the table (Core Requirements):
5+ years of Java development within an enterprise-level domain
Java 8 (11 preferred) features like lambda expressions, Stream API, CompletableFuture, etc.
Skilled with low-latency, high-volume application development
Expertise in CI/CD and shift-left testing
Nice to have: Golang and/or Rust
Experienced with asynchronous programming, multithreading, implementing APIs, and microservices, including Spring Boot
Proficiency with SQL
Experience with data sourcing, data modeling and data enrichment
Experience with Systems Design & CI/CD pipelines
Cloud computing, preferably AWS
Solid verbal and written communication and consultant/client-facing skills are a must. As a true consultant, you are a self-starter who takes initiative.
Solid experience with at least two (preferably more) of the following:
Kafka (Core Concepts, Replication & Reliability, Kafka Internals, Infrastructure & Control, Data Retention and Durability)
MongoDB
Sonar
Jenkins
Oracle DB, Sybase IQ, DB2
Drools or any rules engine experience
CMS tools like Adobe AEM
Search tools like Algolia, Elasticsearch or Solr
Spark

What makes you stand out from the pack:
Payments or Asset/Wealth Management experience
Mature server development and knowledge of frameworks, preferably Spring
Enterprise experience working and building enterprise products, long-term tenure at enterprise-level organizations, experience working with a remote team, and being an avid practitioner in their craft
You have pushed code into production and have deployed multiple products to market, but are missing the visibility of a small team within a large enterprise technology environment. You enjoy coaching junior engineers, but want to remain hands-on with code.
Open to hybrid work - 3 days per week from the office

Mandatory:
Core Java, SOLID Principles, Multithreading, Design Patterns
Spring, Spring Boot, REST API, Microservices
Kafka, messaging/streaming stack
JUnit
Code Optimization, Performance Design, Architecture concepts
Database and SQL
CI/CD - Understanding of Deployment, Infrastructure, Cloud

Good to have:
Network Stack - gRPC, HTTP/2, etc.
Security Stack (OWASP, OAuth, encryption)
Good Communication
Agile

Additional Information:
Shift Timing: 9-5 general shift
Interview Rounds: Virtual (4 Rounds): 2 Internal + 2 Client
Mode of Work: Hybrid - 3 days a week
Office Location: Yerwada, Pune
Notice Period: Immediate to 30 days

Job Type: Full-time
Pay: Up to ₹2,800,000.00 per year
Location Type: In-person
Schedule: Day shift

Application Question(s):
If not in Pune, which is your current location? Are you willing to relocate to Pune?
What is your current CTC?
What is your expected CTC?
What is your notice period?
Experience:
Total work: 5 years (Required)
Java developer: 5 years (Required)
Kafka: 5 years (Required)
Multithreading: 5 years (Required)
Microservices: 5 years (Required)
Spring Boot: 5 years (Required)
Location: Pune, Maharashtra (Preferred)
Work Location: In person
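For context on the asynchronous-programming requirement in this posting: a common pattern in low-latency services is fanning out blocking downstream calls in parallel and joining the results. Sketched here in Python with `concurrent.futures` (the function and data are hypothetical); in Java 8 the same shape is `CompletableFuture.supplyAsync(...)` followed by `join()`:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical downstream call a service might fan out to per symbol.
def fetch_price(symbol: str) -> float:
    return {"AAPL": 1.0, "MSFT": 2.0}[symbol]

def fetch_all(symbols):
    """Submit all calls up front, then join the results -- total latency
    approaches the slowest single call rather than the sum of all calls."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {s: pool.submit(fetch_price, s) for s in symbols}
        return {s: f.result() for s, f in futures.items()}

assert fetch_all(["AAPL", "MSFT"]) == {"AAPL": 1.0, "MSFT": 2.0}
```

The design point being probed in interviews for roles like this is exactly that submit-then-join split: starting all the work before waiting on any of it.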

Posted 3 weeks ago


5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Amex GBT is a place where colleagues find inspiration in travel as a force for good and – through their work – can make an impact on our industry. We’re here to help our colleagues achieve success and offer an inclusive and collaborative culture where your voice is valued. Our team is a dynamic group of professionals who are passionate about technology and innovation. We thrive in a collaborative, inclusive environment where every voice is valued. Together, we strive to deliver world-class solutions and outstanding service to our clients, ensuring their travel experiences are seamless and enjoyable. We’re seeking a DevOps Engineer to join our team and work on a dynamic suite of platforms that service enterprise products and platforms. We are looking for someone who can adapt to recent technologies, thrive in a dynamic environment, and deliver positive outcomes for the company and our clients. If you are as passionate about technology as we are, we want to hear from you!

What You’ll Do On a Typical Day
Work in a SCRUM team.
Design, develop and test new CI/CD or DevOps pipelines for application teams.
Perform administration and operations of overall Red Hat OpenShift solutions for customers, including development of solution designs, implementation plans and documentation.
Onboard applications to enterprise DevOps and container platforms.
Design and develop IT automation & monitoring solutions for business applications.
Assess the provided automation architecture and/or proof of concept, select the best methods for implementation of said architecture, and create appropriate automated tests to validate functionality.
Evaluate and implement orchestration, automation, and tooling solutions to ensure consistent processes and repetitive tasks are performed with the highest level of accuracy and reduced defects.
Analyze platform usage metrics to determine if the platform needs expansion and proactively plan engineering tasks as needed.
Analyze and resolve technical and application/platform problems.
Collaborate and work alongside fellow engineers, designers, and other partners to develop and maintain applications/platforms.
Participate in the evolution and maintenance of existing systems.
Propose new functional and/or technical product improvements.
Experiment with new and emerging technologies, tools and platforms.

What We’re Looking For
5+ years of experience in DevOps and Container platform engineering.
Bachelor's or master's degree in computer science or STEM.
3+ years of experience and good knowledge of Jenkins and GitHub Actions, with demonstrated skills in creating CI/CD pipelines.
3+ years of experience with Docker containerization and clustering (Kubernetes, Docker, Helm, OpenShift, EKS experience).
Knowledge of YAML, with the ability to create Dockerfiles for different environments and resources.
Administering source code (GitHub/GitLab, etc.) & artifact/package/image management (Nexus/JFrog, etc.) tools.
Knowledge of security scanning & DevSecOps SAST, DAST, SCA tools (Snyk, Sonatype, GitLab, Mend, etc.).
Hands-on experience in provisioning Infrastructure as Code (IaC).
Experience with Linux, automation, scripting (Ansible, Bash, Groovy).
Interest in learning and mastering new technologies.
Passion for excellence in platform engineering, DevSecOps, and building enterprise platforms.
Curiosity and passion for problem solving.
Proficient in English.
Inclusive, collaborative, and able to work seamlessly with a multicultural and international team.

Bonus if you have
Experience in AWS.
Knowledge of accessibility (WCAG).
Knowledge of the travel industry.

Technical Skills You’ll Develop
Jenkins, GitHub, GitHub Actions, Nexus/JFrog, SonarQube, Veracode
Kubernetes, Red Hat OpenShift, EKS, Docker, Podman
Ansible, Bash, Groovy
DevSecOps - Snyk, Sonatype, Wiz, etc.
Terraform, CloudFormation, Chef, Puppet, Python
New Relic, ELK (Elasticsearch, Logstash, Kibana), Amplitude Analytics

Location: Gurgaon, India

The #TeamGBT Experience
Work and life: Find your happy medium at Amex GBT. Flexible benefits are tailored to each country and start the day you do. These include health and welfare insurance plans, retirement programs, parental leave, adoption assistance, and wellbeing resources to support you and your immediate family.
Travel perks: get a choice of deals each week from major travel providers on everything from flights to hotels to cruises and car rentals.
Develop the skills you want when the time is right for you, with access to over 20,000 courses on our learning platform, leadership courses, and new job openings available to internal candidates first.
We strive to champion Inclusion in every aspect of our business at Amex GBT. You can connect with colleagues through our global INclusion Groups, centered around common identities or initiatives, to discuss challenges, obstacles, achievements, and drive company awareness and action.
And much more!

All applicants will receive equal consideration for employment without regard to age, sex, gender (and characteristics related to sex and gender), pregnancy (and related medical conditions), race, color, citizenship, religion, disability, or any other class or characteristic protected by law. Click Here for Additional Disclosures in Accordance with the LA County Fair Chance Ordinance. Furthermore, we are committed to providing reasonable accommodation to qualified individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the hiring process. For details regarding how we protect your data, please consult the Amex GBT Recruitment Privacy Statement.

What if I don’t meet every requirement?
If you’re passionate about our mission and believe you’d be a phenomenal addition to our team, don’t worry about “checking every box”; please apply anyway. You may be exactly the person we’re looking for!

Posted 3 weeks ago


1.0 - 2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Why NxtWave
As a Software Development Engineer at NxtWave, you:
Get first-hand experience of building applications and see them released quickly to the NxtWave learners (within weeks)
Get to take ownership of the features you build and work closely with the product team
Work in a great culture that continuously empowers you to grow in your career
Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster)
NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly
Build in a world-class developer environment by applying clean coding principles, code architecture, etc.

Job Responsibilities
Develop REST & GraphQL APIs required for the applications
Design the database schema for new features to develop highly scalable & optimal applications
Work closely with the product team & translate the PRDs & User Stories to the right solution
Write high-quality code following the clean code guidelines, design principles & clean architecture with maximum test coverage
Take ownership of features you are developing & drive them to completion
Do peer code reviews & constantly improve code quality

Skills Required
1-2 years of experience in backend application development
Strong expertise in Python or Java, MySQL, REST API Design
Good understanding of frameworks like Django, Flask or Spring Boot & ability to work with ORMs
Expertise on indexes in MySQL and writing optimal queries
Comfortable with Git
Good problem-solving skills
Write unit and integration tests with high code coverage
Good understanding of NoSQL databases like DynamoDB, ElasticSearch (Good to Have)
Good understanding of AWS services (Good to Have)
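For context on the "indexes in MySQL and writing optimal queries" requirement: the effect of an index is easiest to see in an execution plan. A stdlib sketch using sqlite3 (the table and data are hypothetical; in MySQL you would inspect the plan with `EXPLAIN` the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO users (email, city) VALUES (?, ?)",
    [(f"u{i}@example.com", "Pune" if i % 2 else "Hyderabad") for i in range(1000)],
)

FIND = "SELECT * FROM users WHERE email = ?"

# Without an index, the predicate forces a full table scan.
detail = conn.execute("EXPLAIN QUERY PLAN " + FIND, ("u7@example.com",)).fetchone()[3]
assert "SCAN" in detail

# An index on the filtered column turns the scan into a lookup.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
detail = conn.execute("EXPLAIN QUERY PLAN " + FIND, ("u7@example.com",)).fetchone()[3]
assert "USING INDEX idx_users_email" in detail
```

Writing optimal queries is largely about making predicates index-friendly, e.g. filtering on the indexed column directly rather than wrapping it in a function, so the planner can do a lookup instead of a scan.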
Qualities we'd love to find in you
The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software
Strong collaboration abilities and a flexible & friendly approach to working with teams
Strong determination for completion with a constant eye on solutions
Creative ideas with a problem-solving mindset
Openness to receiving objective criticism and improving upon it
Eagerness to learn and zeal to grow
Strong communication skills are a huge plus

Work Location: Hyderabad

Posted 3 weeks ago


8.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Role: Senior Java Engineer Location: Pune, India (Hybrid - 3 days a week in office) Experience: 8 - 12 Years Shift Timing: 9 AM - 5 PM general shift. Interview Rounds: Virtual (4 Rounds): 2 Internal + 2 Client. Mode of Work: Hybrid - 3 days a week in office. Office Location: Yerwada, Pune. Job Positions: 2. About Us We’re proud to be one of New York City’s fastest-growing product engineering consulting firms, dedicated to driving innovation and scalable growth for our clients. With eight consecutive years on the Inc. 5000 list of America’s Fastest-Growing Companies, we’ve earned a place in the elite Inc. 5000 Hall of Fame — an honor reserved for the top 1% of high-growth companies nationwide . What We Do We specialize in rapidly bringing our clients' most critical and strategic products to market — with high velocity, exceptional quality, and 10x impact. By embedding modern tools, proven methodologies, and forward-thinking leadership, we help build innovative, high-performing teams that thrive in today’s fast-paced digital landscape. This is a unique opportunity to join a dynamic and evolving team. Our client roster includes industry leaders such as Goldman Sachs, Fidelity, Morgan Stanley, and Mastercard. From greenfield innovations to tier-one product builds, our teams lead the delivery of mission-critical projects across product strategy, design, cloud-native applications, and both mobile and web development. The work we do shapes industries — and transforms the way people live, work, and think. About the Role: Senior Java Engineer As a Senior Java Engineer, you will collaborate with lead-level and fellow senior-level engineers to architect and implement solutions that maximize client offerings. In this role, you will develop performant and robust Java applications while continuously evaluating and advancing web technologies within the organization. Responsibilities:- Work on a high-velocity scrum team. 
Collaborate with clients to devise solutions for real-world problems.
Architect and implement scalable end-to-end Web applications.
Support the team lead in facilitating development processes.
Provide estimates and milestones for features/stories.
Work with your mentor for personal learning and growth, and mentor less experienced engineers.
Contribute to the growth of the firm through interviewing and architectural contributions.

Qualifications (Core Requirements)
5+ years of Java development within an enterprise-level domain.
Proficiency with Java 8 (Java 11 preferred) features such as lambda expressions, Stream API, CompletableFuture, etc.
Skilled in low-latency, high-volume application development.
Expertise in CI/CD and shift-left testing.
Nice to have: Golang and/or Rust.
Experienced with asynchronous programming, multithreading, implementing APIs, and microservices, including Spring Boot.
Proficiency with SQL.
Experience with data sourcing, data modeling, and data enrichment.
Experience with Systems Design & CI/CD pipelines.
Cloud computing, preferably AWS.
Solid verbal and written communication and consultant/client-facing skills are a must. As a true consultant, you are a self-starter who takes initiative.

Solid experience with at least two (preferably more) of the following:
Kafka (Core Concepts, Replication & Reliability, Kafka Internals, Infrastructure & Control, Data Retention and Durability)
MongoDB
Sonar
Jenkins
Oracle DB, Sybase IQ, DB2
Drools or any rules engine experience
CMS tools like Adobe AEM
Search tools like Algolia, ElasticSearch, or Solr
Spark

What Makes You Stand Out From The Pack
Payments or Asset/Wealth Management experience.
Mature server development and knowledge of frameworks, preferably Spring.
Enterprise experience working and building enterprise products, long-term tenure at enterprise-level organizations, experience working with a remote team, and being an avid practitioner in their craft.
You have pushed code into production and have deployed multiple products to market, but are seeking the visibility of a small team within a large enterprise technology environment. You enjoy coaching junior engineers, but want to remain hands-on with code. Open to hybrid work - 3 days per week from the office.

Must-Haves (Mandatory):
Core Java, SOLID Principles, Multithreading, Design Patterns.
Spring, Spring Boot, REST API, Microservices.
Kafka, messaging/streaming stack.
JUnit.
Code Optimization, Performance Design, Architecture concepts.
Database and SQL.
CI/CD - Understanding of Deployment, Infrastructure, Cloud.
No gaps in organization. Good stability (no job hoppers).
Joining time/notice period: Immediate to 30 days.

Nice To Haves:
Network Stack - gRPC, HTTP/2, etc.
Security Stack (OWASP, OAuth, encryption).
Good Communication.
Agile.

Skills: SQL, Systems Design, DB2, Data Modeling, Spark, Spring Boot, Spring, Jenkins, CMS tools, CI/CD, SOLID Principles, Code Optimization, Data Enrichment, Data Sourcing, REST API, Microservices, Rust, Solr, Low-latency Application Development, Golang, Java, Design Patterns, Sonar, Elasticsearch, Multithreading, Kafka, Messaging/Streaming Stack, Asynchronous Programming, Algolia, MongoDB, Drools, Cloud Computing, Core Java, Oracle DB, Performance Design, AWS, Sybase IQ, Database, Architecture Concepts, Search Tools, JUnit, Adobe AEM, High-volume Application Development

Posted 3 weeks ago


5.0 years

0 Lacs

Nagpur, Maharashtra, India

On-site

Linkedin logo

Job Role: AWS + Python (L3 Support). Experience: 5+ years. Location: Nagpur.

Job Details: 7+ years of microservices development experience in two of these: Python, Java, Scala. 5+ years of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores. 5+ years of experience with big data technologies: Apache Spark, Hadoop, or Kafka. 3+ years of experience with relational and non-relational databases: Postgres, MySQL, NoSQL (DynamoDB or MongoDB). 3+ years of experience working with data consumption patterns. 3+ years of experience working with automated build and continuous integration systems. 2+ years of experience with search and analytics platforms: OpenSearch or Elasticsearch. 2+ years of experience with cloud technologies: AWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena). Exposure to data-warehousing products: Snowflake or Redshift. Exposure to relational, dimensional, and NoSQL data modelling concepts.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

6 - 16 Lacs

Bengaluru

Remote

Naukri logo

Job Title: Full Stack Engineer (Web Applications). Duration: Full-time role. Location: Remote (Bengaluru). Note: Need someone who can join immediately or within a 30-day notice period.

Job Description

Duties: Participate in development life cycle activities like design, coding, testing, and release for both internal tools and customer-facing products. Develop full-featured web applications, scalable back-end services, web services, RESTful APIs, microservices, etc. Build reusable code and libraries with performance and security in mind. Work closely with team members and PMs to gather requirements, then design, implement, and release. Proven problem-solving and interpersonal communication skills.

Required Skills: Solid understanding of the full web development life cycle. Experience with frameworks such as Django, Laravel, and jQuery/AngularJS. 4+ years of full-stack web development experience with knowledge of both backend (Linux, databases, application servers) and frontend (HTML, CSS, JavaScript). Ability to deliver production code in diverse languages such as Python, PHP, and JavaScript. 2+ years of experience with relational databases and SQL. Experience with NoSQL stores such as Elasticsearch or MongoDB is a plus. Proficient with Git or an equivalent version control tool. Experience with Docker is a plus.

Education: Bachelor's degree in Computer Science or a related field (or equivalent).

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title: Senior Monitoring Engineer (LogicMonitor & Grafana). Experience: 4+ years. Joining: Immediate joiner preferred.

Job Summary: We are seeking a skilled Monitoring Engineer with 4+ years of experience to join our team immediately. The ideal candidate will have hands-on experience in LogicMonitor implementation and support, along with Grafana setup and customization. You will play a key role in deploying, configuring, and maintaining monitoring solutions that ensure system reliability, performance, and availability.

Key Responsibilities: Design, implement, and support LogicMonitor monitoring solutions across infrastructure and applications. Perform LogicMonitor onboarding, dashboard creation, alert configuration, and integration with ITSM tools. Configure and customize Grafana dashboards using various data sources (e.g., Prometheus, InfluxDB, Elasticsearch). Work with DevOps, infrastructure, and application teams to gather monitoring requirements and deliver effective solutions. Maintain and optimize existing monitoring configurations to ensure performance and scalability. Troubleshoot monitoring issues and provide timely support. Document implementation procedures, configuration guides, and support documentation.

Required Skills: 4+ years of experience in infrastructure/application monitoring. Proven hands-on experience with LogicMonitor implementation and ongoing support. Proficient in Grafana setup, dashboard creation, and integration with multiple data sources. Strong knowledge of system performance metrics, alerts, thresholds, and SLAs. Experience with scripting (Python, PowerShell, or Bash) for automation tasks is a plus. Familiarity with cloud platform (AWS, Azure, GCP) monitoring is a plus.

Soft Skills: Excellent communication and problem-solving skills. Ability to work independently and in a team environment. Strong attention to detail and documentation practices.
Employment Type: Full-time. Notice Period: Immediate joiners only. (ref:hirist.tech)

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description Arcadis Development teams within our Intelligence division deliver complex solutions and push the limits of technology solutions. Our talented groups of systems professionals do more than just write code and debug – they make a significant impact on the design and development of state-of-the-art projects. We are looking for a DevOps Engineer to join our growing and dynamic product team. Responsibilities: Proficiency in working with a variety of operating systems including Windows, Linux, Hyper-V, VMWare, and Unix. Configuration and maintenance of secure infrastructure, including address translations, port security for firewalls, and ensuring compliance with security standards. Installation, configuration, management, and maintenance of network devices and appliances. Collaboration on defining security requirements and conducting tests to identify weaknesses. Creating disaster recovery plans and monitoring network backups. Building, securing, and maintaining on-premises and cloud infrastructures. Ensuring availability, performance, security, and scalability of production systems. Troubleshooting system issues causing downtime or performance degradation with expertise in Agile software development methodologies. Implementing CI/CD pipelines, automating configuration management, and using Ansible playbooks. Enforcing DevOps practices in collaboration with software developers. Automating alerts for system availability and performance monitoring. Enhancing development and release processes through automation. Prototyping solutions, evaluating new tools, and engaging in incident handling and root cause analysis. Leading the automation effort and maintaining servers to the latest security standards. Understanding source code security vulnerabilities and maintaining infrastructure code bases using Puppet. Building deployment and testing pipelines using Jenkins, CloudFormation, and Puppet for automated provisioning of cloud infrastructure. 
Supporting and improving Docker-based development practices. Contributing to maturing DevOps culture, showcasing a methodical approach to problem-solving, and following agile practices.

Qualifications: 1+ year of hands-on experience in DevOps on Linux-based systems. Familiarity and/or experience with network technologies (Cisco, Juniper, HPE, etc.), cloud technologies (VMware, OpenStack, Azure, GCP, AWS), CI/CD tools (Jenkins, Ansible, GitHub, etc.), and Linux user administration. Good understanding of code versioning tools like Git; strong knowledge of containerization using Docker. Developing and managing infrastructure as code using Ansible and docker-compose; experience implementing CI/CD pipelines using Jenkins. Knowledge of AWS services like EC2, EBS, VPC (Virtual Private Cloud), ELB, SES, Elastic IP, and Route53. Knowledge of setting up HTTPS certificates and SSL tunnels. Experience setting up application and infrastructure monitoring tools like Prometheus, Grafana, cAdvisor, Node Exporter, and Sentry. Experience working with log analysis and monitoring tools in a distributed application scenario, with independent analysis of problems and implementation of solutions. Experience setting up high availability in databases like Postgres and Elasticsearch, and managing on-premises infrastructure software like Xen Server. Experience with change management and release management in Agile methodology, and DNS management. Understanding of web-related terminologies and software such as web applications, web-related protocols, service-oriented architectures, and web services. Routine security scanning for malicious software and suspicious network activity, along with protocol analysis to identify and remedy network performance issues. Experience with orchestration technologies such as Docker Swarm/Kubernetes and scripting experience with Bash and Python are desirable.
Experience managing BI applications like Tableau and conducting load testing of applications using JMeter is desirable.

Additional Information

Why Arcadis? We can only achieve our goals when everyone is empowered to be their best. We believe everyone's contribution matters. It’s why we are pioneering a skills-based approach, where you can harness your unique experience and expertise to carve your career path and maximize the impact we can make together. You’ll do meaningful work, and no matter what role, you’ll be helping to deliver sustainable solutions for a more prosperous planet. Make your mark, on your career, your colleagues, your clients, your life and the world around you. Together, we can create a lasting legacy. Join Arcadis. Create a Legacy.

Our Commitment to Equality, Diversity, Inclusion & Belonging: We want you to be able to bring your best self to work every day, which is why we take equality and inclusion seriously and hold ourselves to account for our actions. Our ambition is to be an employer of choice and provide a great place to work for all our people. We believe that by working together, diverse people with different experiences develop the most innovative ideas. Equality, diversity and inclusion is at the heart of how we improve quality of life, and we work closely with our people across six ED&I Workstreams: Age, Disability, Faith, Gender, LGBT+ and Race. A diverse and skilled workforce is essential to our success.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

Remote

Linkedin logo

About KnowBe4: KnowBe4, the provider of the world's largest security awareness training and simulated phishing platform, is used by tens of thousands of organizations around the globe. KnowBe4 enables organizations to manage the ongoing problem of social engineering by helping them train employees to make smarter security decisions, every day. Fortune has ranked us as a best place to work for women, for millennials, and in technology for four years in a row! We have been certified as a "Great Place To Work" in 8 countries, plus we've earned numerous other prestigious awards, including Glassdoor's Best Places To Work. Our team values radical transparency, extreme ownership, and continuous professional development in a welcoming workplace that encourages all employees to be themselves. Whether working remotely or in-person, we strive to make every day fun and engaging; from team lunches to trivia competitions to local outings, there is always something exciting happening at KnowBe4. Please submit your resume in English.

The individual in this role is responsible for leading software development teams to develop new and exciting products for KnowBe4’s customers, alongside other engineers in a fast-paced, agile development environment.

Responsibilities: Leads a software team that develops software using the KnowBe4 Software Development Lifecycle and Agile methodologies. Recommends solutions to engineering problems. Translates KnowBe4's strategic goals into operational plans. Provides coordination across team boundaries.

Requirements: BS or equivalent plus 8 years of technical experience; MS or equivalent plus 3 years of technical experience; or Ph.D. or equivalent plus 2 years of technical experience. 3 years of experience managing software development teams. Build, manage, and deliver high-quality software products and features. Ability to manage a team of highly talented software engineers. Should have extensive experience building and integrating REST-based APIs, with best practices for authentication and authorization, in enterprise-grade production environments. Experience building apps and microservices on the AWS platform using Python. Expert knowledge of at least one web framework technology such as Python Django/Flask/Rails/Express. Understanding and experience in building software systems following software design principles. Demonstrable knowledge of fundamental cloud concepts around multi-tenancy, scaling out, and serverless. Working experience writing clean, unit-tested, and secure code. Working knowledge of relational databases such as MySQL/Postgres and expertise in SQL. Knowledge of NoSQL databases such as Mongo and Elasticsearch is preferred. Experience with continuous delivery and integration pipelines (Docker/GitLab/Terraform and other automated deployment and testing tools). Should be open to learning new technologies and programming languages as needed. Experience working with APIs in the cybersecurity industry, and understanding the basics of the current security landscape (attack frameworks, security log processing, basic knowledge of AV/EDR/DLP/CASB, etc.), is a huge plus. Experience building scalable data processing pipelines is a plus.

Our Fantastic Benefits: We offer company-wide bonuses based on monthly sales targets, employee referral bonuses, adoption assistance, tuition reimbursement, certification reimbursement, certification completion bonuses, and a relaxed dress code - all in a modern, high-tech, and fun work environment. For more details about our benefits in each office location, please visit www.knowbe4.com/careers/benefits.
Note: An applicant assessment and background check may be part of your hiring procedure. Individuals seeking employment at KnowBe4 are considered without prejudice to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation or any other characteristic protected under applicable federal, state, or local law. If you require reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please visit www.knowbe4.com/careers/request-accommodation. No recruitment agencies, please.

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Company Description Arcadis is the world's leading company delivering sustainable design, engineering, and consultancy solutions for natural and built assets. We are more than 36,000 people, in over 70 countries, dedicated to improving quality of life. Everyone has an important role to play. With the power of many curious minds, together we can solve the world’s most complex challenges and deliver more impact together. Job Description Arcadis Development teams within our Intelligence division deliver complex solutions and push the limits of technology solutions. Our talented groups of systems professionals do more than just write code and debug – they make a significant impact on the design and development of state-of-the-art projects. We are looking for a DevOps Engineer to join our growing and dynamic product team. Responsibilities: Proficiency in working with a variety of operating systems including Windows, Linux, Hyper-V, VMWare, and Unix. Configuration and maintenance of secure infrastructure, including address translations, port security for firewalls, and ensuring compliance with security standards. Installation, configuration, management, and maintenance of network devices and appliances. Collaboration on defining security requirements and conducting tests to identify weaknesses. Creating disaster recovery plans and monitoring network backups. Building, securing, and maintaining on-premises and cloud infrastructures. Ensuring availability, performance, security, and scalability of production systems. Troubleshooting system issues causing downtime or performance degradation with expertise in Agile software development methodologies. Implementing CI/CD pipelines, automating configuration management, and using Ansible playbooks. Enforcing DevOps practices in collaboration with software developers. Automating alerts for system availability and performance monitoring. Enhancing development and release processes through automation. 
Prototyping solutions, evaluating new tools, and engaging in incident handling and root cause analysis. Leading the automation effort and maintaining servers to the latest security standards. Understanding source code security vulnerabilities and maintaining infrastructure code bases using Puppet. Building deployment and testing pipelines using Jenkins, CloudFormation, and Puppet for automated provisioning of cloud infrastructure. Supporting and improving Docker-based development practices. Contributing to maturing DevOps culture, showcasing a methodical approach to problem-solving, and following agile practices.

Qualifications: 1+ year of hands-on experience in DevOps on Linux-based systems. Familiarity and/or experience with network technologies (Cisco, Juniper, HPE, etc.), cloud technologies (VMware, OpenStack, Azure, GCP, AWS), CI/CD tools (Jenkins, Ansible, GitHub, etc.), and Linux user administration. Good understanding of code versioning tools like Git; strong knowledge of containerization using Docker. Developing and managing infrastructure as code using Ansible and docker-compose; experience implementing CI/CD pipelines using Jenkins. Knowledge of AWS services like EC2, EBS, VPC (Virtual Private Cloud), ELB, SES, Elastic IP, and Route53. Knowledge of setting up HTTPS certificates and SSL tunnels. Experience setting up application and infrastructure monitoring tools like Prometheus, Grafana, cAdvisor, Node Exporter, and Sentry. Experience working with log analysis and monitoring tools in a distributed application scenario, with independent analysis of problems and implementation of solutions. Experience setting up high availability in databases like Postgres and Elasticsearch, and managing on-premises infrastructure software like Xen Server. Experience with change management and release management in Agile methodology, and DNS management.
Understanding of web-related terminologies and software such as web applications, web-related protocols, service-oriented architectures, and web services. Routine security scanning for malicious software and suspicious network activity, along with protocol analysis to identify and remedy network performance issues. Experience with orchestration technologies such as Docker Swarm/Kubernetes and scripting experience with Bash and Python are desirable. Experience managing BI applications like Tableau and conducting load testing of applications using JMeter is desirable.

Additional Information

Why Arcadis? We can only achieve our goals when everyone is empowered to be their best. We believe everyone's contribution matters. It’s why we are pioneering a skills-based approach, where you can harness your unique experience and expertise to carve your career path and maximize the impact we can make together. You’ll do meaningful work, and no matter what role, you’ll be helping to deliver sustainable solutions for a more prosperous planet. Make your mark, on your career, your colleagues, your clients, your life and the world around you. Together, we can create a lasting legacy. Join Arcadis. Create a Legacy.

Our Commitment to Equality, Diversity, Inclusion & Belonging: We want you to be able to bring your best self to work every day, which is why we take equality and inclusion seriously and hold ourselves to account for our actions. Our ambition is to be an employer of choice and provide a great place to work for all our people. We believe that by working together, diverse people with different experiences develop the most innovative ideas. Equality, diversity and inclusion is at the heart of how we improve quality of life, and we work closely with our people across six ED&I Workstreams: Age, Disability, Faith, Gender, LGBT+ and Race. A diverse and skilled workforce is essential to our success.

Posted 3 weeks ago

Apply


2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

We Are Looking For: 2+ years of expertise in software development with one or more general-purpose programming languages (e.g., Python, Java, C/C++, Go); experience in Python and Django is recommended. Deep understanding of how to build an application with optimized RESTful APIs. Knowledge of a web framework like Django (or similar with an ORM), or of multi-tier, multi-DB, data-heavy web application development, will help your profile stand out. Knowledge of Gen AI tools and technologies is a plus. Sound knowledge of SQL queries and databases like PostgreSQL (must) or MySQL. Working knowledge of NoSQL DBs (Elasticsearch, Mongo, Redis, etc.) is a plus. Knowledge of a graph DB like Neo4j or AWS Neptune adds extra credit to your profile. Knowing queue-based messaging frameworks like Celery, RQ, or Kafka, and an understanding of distributed systems, will be advantageous. Understands a programming language's limitations and can exploit the language's behavior to its fullest potential. Understanding of accessibility and security compliance. Ability to communicate complex technical concepts to both technical and non-technical audiences with ease. Diversity in skills like version control tools, CI/CD, cloud basics, good debugging skills, and test-driven development will help your profile stand out.

Skills: Python, Java, and SQL.

Posted 3 weeks ago

Apply

Exploring Elasticsearch Jobs in India

Elasticsearch is a powerful search and analytics engine used by businesses worldwide to manage and analyze their data efficiently. In India, the demand for Elasticsearch professionals is on the rise, with many companies seeking skilled individuals to work on various projects involving data management, search capabilities, and more.
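For a flavor of how that searching and analysis is expressed, Elasticsearch accepts JSON request bodies in its Query DSL over a REST API. Below is a minimal sketch of such a body, built as a plain Python dict; the index fields ("title", "category") are illustrative, not from any real schema:

```python
import json

# A minimal Elasticsearch Query DSL body: a full-text "match" query on one
# field, plus a "terms" aggregation to count documents per category.
search_body = {
    "query": {
        "match": {"title": "elasticsearch tutorial"}
    },
    "aggs": {
        "by_category": {
            "terms": {"field": "category.keyword"}
        }
    },
    "size": 10,  # return at most 10 hits
}

# In practice this body is POSTed to /<index>/_search;
# here we just show the serialized JSON.
print(json.dumps(search_body, indent=2))
```

The same body could be sent with curl or any of the official language clients; the point is that queries and analytics (aggregations) travel together in one request.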

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and have a high demand for Elasticsearch professionals.

Average Salary Range

The salary range for Elasticsearch professionals in India varies based on experience and skill level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career path in Elasticsearch may involve starting as a Junior Developer, moving on to become a Senior Developer, and eventually progressing to a Tech Lead position. With experience and expertise, one can also explore roles such as Solution Architect or Data Engineer.

Related Skills

Apart from Elasticsearch, professionals in this field are often expected to have knowledge of the following skills:

  • Apache Lucene
  • Java programming
  • Data modeling
  • RESTful APIs
  • Database management systems
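Apache Lucene matters here because Elasticsearch is built on it, and Lucene's core data structure is the inverted index: a map from each term to the documents that contain it. The toy Python sketch below illustrates the idea only; real analyzers also handle stemming, stop words, and scoring:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document IDs containing it.

    `docs` is {doc_id: text}. Tokenization is a naive lowercase split,
    a deliberate simplification of what Lucene analyzers do.
    """
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    1: "Elasticsearch is a search engine",
    2: "Lucene powers the search index",
}
index = build_inverted_index(docs)
print(sorted(index["search"]))  # documents containing "search" -> [1, 2]
```

Looking up a term is then a dictionary access rather than a scan of every document, which is why full-text search stays fast as the corpus grows.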

Interview Questions

  • What is Elasticsearch and how does it differ from traditional databases? (basic)
  • Explain the purpose of an inverted index in Elasticsearch. (medium)
  • How does sharding work in Elasticsearch and why is it important? (medium)
  • What are the different types of queries supported by Elasticsearch? (basic)
  • How can you improve the performance of Elasticsearch queries? (medium)
  • What is the role of analyzers in Elasticsearch? (basic)
  • Explain the concept of mapping in Elasticsearch. (medium)
  • How does Elasticsearch handle scalability and high availability? (medium)
  • What is the significance of the "_source" field in Elasticsearch documents? (basic)
  • How does Elasticsearch handle full-text search? (medium)
  • What is the purpose of the "cluster" in Elasticsearch? (basic)
  • Explain the role of the "query DSL" in Elasticsearch. (medium)
  • How can you monitor the performance of an Elasticsearch cluster? (medium)
  • What are the different types of aggregations supported by Elasticsearch? (medium)
  • How does Elasticsearch handle document versioning? (medium)
  • What are the common data types supported by Elasticsearch? (basic)
  • How can you handle security in Elasticsearch? (medium)
  • Explain the concept of "indexing" in Elasticsearch. (basic)
  • What is the significance of the "refresh" interval in Elasticsearch? (basic)
  • How can you create a backup of an Elasticsearch cluster? (medium)
  • How does Elasticsearch handle conflicts during document updates? (medium)
  • Explain the concept of "relevance" in Elasticsearch search results. (medium)
  • How can you integrate Elasticsearch with other tools or platforms? (medium)
  • What are the key considerations for performance tuning in Elasticsearch? (advanced)
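Several of the questions above (sharding, routing, scalability) come down to one documented formula: Elasticsearch routes a document to shard `hash(_routing) % number_of_primary_shards`, where `_routing` defaults to the document ID and the real hash function is Murmur3. The sketch below substitutes Python's `zlib.crc32` for Murmur3 purely so the example is self-contained; it illustrates the mechanism, not Elasticsearch's actual hash values:

```python
import zlib

def shard_for(doc_id: str, num_primary_shards: int) -> int:
    """Illustrative routing: hash the routing value, mod the shard count.

    Elasticsearch uses Murmur3; zlib.crc32 stands in here so the example
    runs anywhere and stays deterministic.
    """
    return zlib.crc32(doc_id.encode("utf-8")) % num_primary_shards

# Routing is deterministic: the same ID always lands on the same shard.
# This is also why the primary shard count cannot change after index
# creation without reindexing - the modulus is baked into every placement.
for doc_id in ["user-1", "user-2", "user-3"]:
    print(doc_id, "-> shard", shard_for(doc_id, 5))
```

A useful interview follow-up: because placement depends only on `hash % n`, adding capacity is done by adding replicas or reindexing into an index with more primaries, not by changing `n` in place.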

Closing Remark

As you explore job opportunities in Elasticsearch in India, remember to continuously enhance your skills and knowledge in this field. Prepare thoroughly for interviews and showcase your expertise confidently. With the right mindset and preparation, you can excel in your Elasticsearch career and contribute significantly to the tech industry in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies