
58 Apache Zookeeper Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 8.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Description:
Stand up and administer on-premises Kafka clusters. Architect and create reference architectures and implementation standards for Kafka. Provide expertise in Kafka brokers, ZooKeeper, Kafka Connect, Schema Registry, KSQL, REST Proxy, and Kafka Control Center. Ensure optimum performance, high availability, and stability of solutions. Create topics, set up redundant clusters, deploy monitoring tools and alerts, and apply best practices. Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms. Administer and operate the Kafka platform, including provisioning, access control lists, and Kerberos and SSL configurations. Use automation tools such as Docker, Jenkins, and GitLab for provisioning. Perform data-related benchmarking, performance analysis, and tuning. Strong skills in in-memory applications, database design, and data integration. Participate in design and capacity review meetings to provide suggestions on Kafka usage. Solid knowledge of monitoring tools and fine-tuning of alerts in Splunk, Prometheus, and Grafana. Set up security on Kafka. Provide naming conventions, backup and recovery, and problem-determination strategies for projects. Monitor, prevent, and troubleshoot security-related issues. Provide strategic vision in engineering solutions that touch the messaging-queue aspect of the infrastructure.

QUALIFICATIONS:
Demonstrated proficiency and experience in the design, implementation, monitoring, and troubleshooting of Kafka messaging infrastructure. Hands-on experience with recovery in Kafka. Two or more years of experience developing or customizing messaging-related monitoring tools/utilities. Good scripting knowledge/experience with one or more tools (e.g., Chef, Ansible, Terraform). Good programming knowledge/experience with one or more languages (e.g., Java, Node.js, Python). Considerable experience implementing Kerberos security. Support a 24x7 model and be available for rotational on-call work. Competent working in one or more environments highly integrated with an operating system. Experience implementing and administering/managing technical solutions in major, large-scale system implementations. Strong critical-thinking skills to evaluate alternatives and present solutions that are consistent with business objectives and strategy. Ability to manage tasks independently and take ownership of responsibilities. Ability to learn from mistakes and apply constructive feedback to improve performance. Ability to adapt to a rapidly changing environment. Proven leadership abilities, including effective knowledge sharing, conflict resolution, facilitation of open discussions, fairness, and appropriate assertiveness. Ability to communicate highly complex technical information clearly and articulately for all levels and audiences. Willingness to learn new technologies/tools and train your peers. Proven track record of automation.
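
As an illustrative aside (not part of the posting), a minimal sketch of creating a replicated topic with Kafka's Java AdminClient, one of the admin tasks described above; the broker addresses, topic name, and settings are placeholder assumptions:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker list; a real cluster would supply its own bootstrap servers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical topic: 6 partitions, replication factor 3 for redundancy.
            NewTopic topic = new NewTopic("orders-events", 6, (short) 3)
                    .configs(Map.of("min.insync.replicas", "2", "retention.ms", "604800000"));
            admin.createTopics(List.of(topic)).all().get();  // blocks until the controller confirms
        }
    }
}
```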

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Database Cloud Engineer at Salesforce, you will play a crucial role in ensuring the reliability, scalability, and performance of our vast cloud database infrastructure. Your responsibilities will involve architecting and operating resilient, secure, and performant database environments across public cloud platforms such as AWS and GCP. Collaborating across various teams, you will deliver cloud-native reliability solutions at a massive scale, contributing to one of the largest SaaS platforms globally. The CRM Database Sustaining Engineering team is a fast-paced and dynamic global team responsible for delivering and supporting databases and their cloud infrastructure to meet the substantial growth needs of the business. In this role, you will work closely with other engineering teams to deliver innovative solutions in an agile, dynamic environment. Collaboration with Application, Systems, Network, Database, and Storage teams is key to your success. As part of the Global Team, you will be engaged in 24x7 support responsibilities within Europe, requiring occasional flexibility in working hours to align globally. You will be immersed in managing Salesforce cloud databases running on cutting-edge cloud technology and ensuring their reliability.

Job Requirements:
- Bachelor's degree in Computer Science or Engineering, or equivalent experience.
- Minimum of 8+ years of experience as a Database Engineer or similar role.
- Expertise in database and SQL performance tuning for relational databases.
- Knowledge and hands-on experience with the Postgres database is a plus.
- Deep knowledge of at least two relational databases, including Oracle, PostgreSQL, and MySQL.
- Working knowledge of cloud platforms like AWS or GCP is highly desirable.
- Experience with cloud technologies such as Docker, Spinnaker, Terraform, Helm, Jenkins, and Git. Exposure to Zookeeper fundamentals and Kubernetes is highly desirable.
- Proficiency in SQL and at least one procedural language like Python, Go, or Java. Basic understanding of C is preferred.
- Strong problem-solving skills and experience with Production Incident Management and Root Cause analysis.
- Experience with mission-critical distributed systems services and supporting Database Production Infrastructure with 24x7x365 responsibilities.
- Exposure to a fast-paced environment with a large-scale cloud infrastructure setup.
- Excellent communication skills and attention to detail, with a proactive and self-starting approach.

Preferred Qualifications:
- Hands-on DevOps experience, including CI/CD pipelines and container orchestration like Kubernetes, EKS, or GKE.
- Cloud-native DevOps experience with CI/CD, EKS/GKE, and cloud deployments.
- Familiarity with distributed coordination systems such as Apache Zookeeper.
- Deep understanding of distributed systems, availability design patterns, and database internals.
- Expertise in monitoring and alerting using tools like Grafana, Argus, or similar.
- Automation experience with tools like Spinnaker, Helm, and Infrastructure as Code frameworks.
- Ability to drive technical projects from ideation to execution with minimal supervision.
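
Illustrative aside (not from the posting): a minimal sketch of checking connectivity to a ZooKeeper ensemble with the plain Java client, the kind of fundamentals the requirements mention; the ensemble address and znode path are assumptions:

```java
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

import java.util.concurrent.CountDownLatch;

public class ZkHealthSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Placeholder ensemble address and session timeout; a real check would read these from config.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // Look up a hypothetical znode that a coordination-aware service might register.
        Stat stat = zk.exists("/services/primary-db", false);
        System.out.println(stat == null ? "znode absent" : "znode present, version " + stat.getVersion());
        zk.close();
    }
}
```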

Posted 1 week ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are looking for a Big Data Developer to build and maintain scalable data processing systems. The ideal candidate will have experience handling large datasets and working with distributed computing frameworks.

Key Responsibilities: Design and develop data pipelines using Hadoop, Spark, or Flink. Optimize big data applications for performance and reliability. Integrate various structured and unstructured data sources. Work with data scientists and analysts to prepare datasets. Ensure data quality, security, and lineage across platforms.

Required Skills & Qualifications: Experience with the Hadoop ecosystem (HDFS, Hive, Pig) and Apache Spark. Proficiency in Java, Scala, or Python. Familiarity with data ingestion tools (Kafka, Sqoop, NiFi). Strong understanding of distributed computing principles. Knowledge of cloud-based big data services (e.g., EMR, Dataproc, HDInsight).

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa, Delivery Manager, Integra Technologies
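
Illustrative aside (not from the posting): a minimal sketch of a Kafka-to-Spark Structured Streaming ingestion pipeline in Java, one possible shape of the pipelines described; the broker, topic, and console sink are assumptions:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaIngestSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-ingest-sketch")
                .getOrCreate();

        // Read a hypothetical "clickstream" topic as a streaming DataFrame.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092")
                .option("subscribe", "clickstream")
                .load()
                .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        // Console sink for inspection; a real pipeline would target HDFS, Hive, or a lake table.
        StreamingQuery query = events.writeStream()
                .format("console")
                .outputMode("append")
                .start();
        query.awaitTermination();
    }
}
```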

Posted 1 week ago

Apply

2.0 - 4.0 years

7 - 12 Lacs

Bengaluru

Work from Office

YOUR IMPACT: OpenText is the market leader in Enterprise Information Management platforms and applications. As a Software Engineer, you will use your knowledge and experience to perform systems analysis, research, maintenance, troubleshooting, and other programming activities. You will become a member of one of our Agile-based project teams, focusing on product development. It is an exciting opportunity to design and implement solutions for enterprise-level systems. OpenText Business Network is a modern cloud platform that helps manage the full data lifecycle, from information capture and exchange to integration and governance. Business Network solutions establish the necessary digital backbone for streamlined connectivity, secure collaboration, and real-time business intelligence across an expanding network of internal systems, cloud applications, trading partner systems, and connected devices.

WHAT THE ROLE OFFERS: Meet with the software development team to discuss project definitions and goals. Analyze user and system requirements for the software product. Prepare high-level and low-level design documents and take part in software development. Provide hands-on technical leadership and guidance to junior engineers. Participate in and drive design sessions in the team. Write excellent code following industry best practices. Write efficient code based on feature specifications. Design software database architecture. Test and debug software applications. Validate the functionality and security of the application. Respond promptly and professionally to bug reports and customer escalations. Perform the job with minimal assistance. Development, deployment, and support in production. Work with Quality Assurance to transfer knowledge, develop the test strategy, and validate the test plan. Technical risk assessment, problem solving, early risk notification, and reporting progress and status to the manager and/or Scrum Master. Troubleshoot production issues within the defined SLAs.

WHAT YOU NEED TO SUCCEED: Bachelor's degree (Computer Science preferred) with 2+ years of experience in software development. Ability to take direction and work with minimal supervision. Ability to work in a deadline-driven environment and respond creatively to pressure. Ability to work on multiple projects simultaneously. Excellent analytical skills. Hands-on expertise in OOP and Java/J2EE. Experience working on at least one application server (Tomcat, BEA WebLogic, IBM WebSphere, JBoss). Experience in database design and strong knowledge of SQL/PLSQL. Experience in designing/development (design patterns) and testing of enterprise-class systems. Experience in UI design and development using technologies like Angular/React. Experience in Spring, Spring Boot, Hibernate, RESTful services, and microservices. Strong communication skills - verbal, written, and listening. Excellent troubleshooting skills.

Desired Skills: Experience in Hibernate, JSP/Servlets, JMS, XSLT, XML basics, XQuery, and Kotlin. Experience in Solr/Zookeeper is an added advantage. Experience in REST security such as OAuth and JWT will be an added advantage.
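
Illustrative aside (not from the posting): a minimal Spring Boot REST controller sketch of the kind of service code this role involves; the endpoint and payload are hypothetical:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import java.util.Map;

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

@RestController
class DocumentController {
    // Hypothetical endpoint; a real service would delegate to a service/repository layer.
    @GetMapping("/documents/{id}")
    public Map<String, Object> getDocument(@PathVariable String id) {
        return Map.of("id", id, "status", "ACTIVE");
    }
}
```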

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Kannur

Work from Office

Role Purpose

Required Skills: 5+ years of experience in system administration, application development, infrastructure development, or related areas. 5+ years of experience programming in languages like JavaScript, Python, PHP, Go, Java, or Ruby. 3+ years of experience reading, understanding, and writing code in those languages. 3+ years of mastery of infrastructure automation technologies (such as Terraform, CodeDeploy, Puppet, Ansible, Chef). 3+ years of expertise in container/container-fleet-orchestration technologies (such as Kubernetes, OpenShift, AKS, EKS, Docker, Vagrant, etcd, ZooKeeper). 5+ years of cloud and container-native Linux administration/build/management skills.

Key Responsibilities: Hands-on design, analysis, development, and troubleshooting of highly distributed, large-scale production systems and event-driven, cloud-based services. Primarily Linux administration, managing a fleet of Linux and Windows VMs as part of the application solutions. Involved in pull requests for site reliability goals. Advocate IaC (Infrastructure as Code) and CaC (Configuration as Code) practices within Honeywell HCE. Ownership of reliability, uptime, system security, cost, operations, capacity, and performance analysis. Monitor and report on service level objectives for given application services. Work with the business, technology teams, and product owners to establish key service level indicators. Ensure the repeatability, traceability, and transparency of our infrastructure automation. Support on-call rotations for operational duties that have not been addressed with automation. Support healthy software development practices, including complying with the chosen software development methodology (Agile, or alternatives) and building standards for code reviews, work packaging, etc. Create and maintain monitoring technologies and processes that improve visibility into our applications' performance and business metrics and keep operational workload in check. Partner with security engineers to develop plans and automation to aggressively and safely respond to new risks and vulnerabilities. Develop, communicate, collaborate, and monitor standard processes to promote the long-term health and sustainability of operational development tasks.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Gurugram

Work from Office

Job Title : Kafka Integration Specialist Job Description : We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.
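
Illustrative aside (not from the posting): a minimal Kafka Streams topology sketch in Java, one way the integration work described might look; the application id, topics, and transformation are placeholder assumptions:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-enricher");   // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // placeholder brokers
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders-raw");
        orders.filter((key, value) -> value != null && !value.isBlank())
              .mapValues(String::toUpperCase)   // stand-in for real enrichment logic
              .to("orders-clean", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```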

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

Salesforce is the global leader in customer relationship management (CRM) software, pioneering the shift to cloud computing. Today, Salesforce delivers the next generation of social, mobile, and cloud technologies to help companies revolutionize the way they sell, service, market, and innovate, enabling them to become customer-centric organizations. As the fastest-growing enterprise software company in the top 10, Salesforce has been recognized as the World's Most Innovative Company by Forbes and as one of Fortune's 100 Best Companies to Work For. The CRM Database Sustaining Engineering Team at Salesforce is responsible for deploying and managing some of the largest and most trusted databases globally. Customers rely on this team to ensure the safety and high availability of their data. As a Database Cloud Engineer at Salesforce, you will have a mission-critical role in ensuring the reliability, scalability, and performance of Salesforce's extensive cloud database infrastructure. You will contribute to powering one of the largest Software as a Service (SaaS) platforms globally. We are seeking engineers with a DevOps mindset and deep expertise in databases to architect and operate secure, resilient, and high-performance database environments across public cloud platforms such as AWS and GCP. Collaboration across various domains including systems, storage, networking, and applications is essential to deliver cloud-native reliability solutions at a massive scale. The CRM Database Sustaining Engineering team is a dynamic and fast-paced global team that delivers and supports databases and cloud infrastructure to meet the evolving needs of the business. In this role, you will collaborate with other engineering teams to deliver innovative solutions in an agile and dynamic environment. As part of the Global Team, you will engage in 24/7 support responsibilities within Europe, requiring occasional flexibility in working hours to align globally. You will be responsible for the reliability of Salesforce's cloud database, running on cutting-edge cloud technology.

Job Requirements:
- Bachelor's in Computer Science or Engineering, or equivalent experience.
- Minimum of 8+ years of experience as a Database Engineer or in a similar role.
- Expertise in database and SQL performance tuning in at least one relational database.
- Knowledge and hands-on experience with the Postgres database is advantageous.
- Broad and deep knowledge of at least two relational databases, including Oracle, PostgreSQL, and MySQL.
- Working knowledge of cloud platforms such as AWS or GCP is highly desirable.
- Experience with cloud technologies like Docker, Spinnaker, Terraform, Helm, Jenkins, Git, etc. Exposure to Zookeeper fundamentals and Kubernetes is highly desirable.
- Proficiency in SQL and at least one procedural language such as Python, Go, or Java, with a basic understanding of C.
- Excellent problem-solving skills and experience with Production Incident Management and Root Cause analysis.
- Experience with mission-critical distributed systems services, including supporting Database Production Infrastructure with 24x7x365 support responsibilities.
- Exposure to a fast-paced environment with a large-scale cloud infrastructure setup.
- Strong speaking, listening, and writing skills, attention to detail, and a proactive, self-starting approach.

Preferred Qualifications:
- Hands-on DevOps experience including CI/CD pipelines and container orchestration (Kubernetes, EKS/GKE).
- Cloud-native DevOps experience (CI/CD, EKS/GKE, cloud deployments).
- Familiarity with distributed coordination systems like Apache Zookeeper.
- Deep understanding of distributed systems, availability design patterns, and database internals.
- Monitoring and alerting expertise using tools like Grafana, Argus, or similar.
- Automation experience with tools like Spinnaker, Helm, and Infrastructure as Code frameworks.
- Ability to drive technical projects from idea to execution with minimal supervision.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Job Summary: We are looking for a skilled Apache Solr Engineer to design, implement, and maintain scalable, high-performance search solutions. The ideal candidate will have hands-on experience with Solr/SolrCloud, strong analytical skills, and the ability to work in cross-functional teams to deliver efficient search functionality across enterprise or customer-facing applications.

Experience: 4–8 years

Key Responsibilities: Design, develop, and maintain enterprise-grade search solutions using Apache Solr and SolrCloud. Develop and optimize search indexes and schemas based on use cases like product search, document search, or order/invoice search. Integrate Solr with backend systems, databases, and APIs. Implement full-text search, faceted search, auto-suggestions, ranking, and relevancy tuning. Optimize search performance, indexing throughput, and query response time. Ensure data consistency and high availability using SolrCloud and ZooKeeper (cluster coordination and configuration management). Monitor search system health and troubleshoot issues in production. Collaborate with product teams, data engineers, and DevOps teams for smooth delivery. Stay up to date with new features of Apache Lucene/Solr and recommend improvements.

Required Skills & Qualifications: Strong experience in Apache Solr and SolrCloud. Good understanding of Lucene, inverted indexes, analyzers, tokenizers, and search relevance tuning. Proficient in Java or Python for backend integration and development. Experience with RESTful APIs, data pipelines, and real-time indexing. Familiarity with ZooKeeper, Docker, and Kubernetes (for SolrCloud deployments). Knowledge of JSON, XML, and schema design in Solr. Experience with log analysis, performance tuning, and monitoring tools like Prometheus/Grafana is a plus. Exposure to e-commerce or document management search use cases is an advantage.

Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience with Elasticsearch or other search technologies is a plus. Working knowledge of CI/CD pipelines and cloud platforms (Azure).
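
Illustrative aside (not from the posting): a minimal SolrJ query sketch in Java showing a faceted product search of the sort described; the Solr URL, collection, and field names are assumptions:

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class SolrSearchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder Solr endpoint and collection name.
        try (Http2SolrClient solr = new Http2SolrClient.Builder("http://localhost:8983/solr/products").build()) {
            SolrQuery query = new SolrQuery("name:laptop");
            query.addFacetField("brand");          // faceted search on a hypothetical "brand" field
            query.setRows(10);
            query.setSort("score", SolrQuery.ORDER.desc);

            QueryResponse response = solr.query(query);
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc.getFieldValue("id") + " -> " + doc.getFieldValue("name"));
            }
        }
    }
}
```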

Posted 2 weeks ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Noida

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.
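
Illustrative aside (not from the posting): a minimal Java producer sketch with the reliability settings (acks=all, idempotence) that the replication and failover practices above imply; the broker, topic, and payload are placeholders:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ReliableProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");          // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // wait for all in-sync replicas
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // avoid duplicates on retry

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("payments", "order-42", "{\"amount\": 99.5}"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();     // a real pipeline would alert or route to a DLQ
                        } else {
                            System.out.printf("written to %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
        }
    }
}
```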

Posted 2 weeks ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Pune

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

18 - 22 Lacs

Navi Mumbai, Mumbai (All Areas)

Work from Office

1. Education: B.E./B.Tech/MCA in Computer Science
2. Experience: Must have 7+ years of relevant experience in the field of database administration.
3. Mandatory Skills/Knowledge: The candidate should be technically sound in multiple distributions such as Cloudera, Confluent, and open-source Kafka. The candidate should be technically sound in Kafka and ZooKeeper. The candidate should be well versed in capacity planning and performance tuning. The candidate should have expertise in implementing security in the ecosystem (Hadoop security: Ranger, Kerberos, SSL). The candidate should have expertise in DevOps tools such as Ansible, Nagios, shell scripting, Python, Jenkins, Git, and Maven to implement automation. The candidate should be able to monitor, debug, and provide RCA for any service failure. Knowledge of network infrastructure, e.g., TCP/IP, DNS, firewalls, routers, load balancers. Creative analytical and problem-solving skills. Provide RCAs for critical and recurring incidents. Provide on-call service coverage within a larger group. Good aptitude in multi-threading and concurrency concepts.
4. Preferred Skills/Knowledge: Expert knowledge of database administration and architecture. Hands-on experience with operating system commands.
Kindly share CVs at snehal.sankade@outworx.com
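
Illustrative aside (not from the posting): a minimal sketch of the client-side properties a Kerberos- and SSL-secured Kafka cluster typically requires; every hostname, path, principal, and password here is a placeholder assumption:

```java
import java.util.Properties;

public class SecureClientConfigSketch {
    static Properties secureClientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9093");               // placeholder host
        props.put("security.protocol", "SASL_SSL");                               // Kerberos over TLS
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks"); // placeholder path
        props.put("ssl.truststore.password", "changeit");                         // placeholder secret
        props.put("sasl.jaas.config",
                "com.sun.security.auth.module.Krb5LoginModule required "
                + "useKeyTab=true storeKey=true "
                + "keyTab=\"/etc/security/keytabs/app.keytab\" "
                + "principal=\"app@EXAMPLE.COM\";");
        return props;
    }
}
```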

Posted 2 weeks ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Ahmedabad

Work from Office

Job Title : Kafka Integration Specialist Job Description : We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 13 Lacs

Noida, Gurugram, Bengaluru

Hybrid

This role is for a client of ours. Prior contract work experience is preferred.
Role Type: Contract
Contract Duration: 6 months (extendable)
Location: Gurgaon / Noida / Bangalore
Work Mode: Hybrid
Max Budget: 1.1 Lac/month (depending on the candidate)
Required Candidate Profile: 5+ years of experience in Java, especially in Order and Execution Management and Trading systems, along with SDLC, MySQL, and Spring.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Chennai

Work from Office

Job Title : Kafka Integration Specialist Job Description : We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Bengaluru

Work from Office

As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

SENIOR ENGINEER, KAFKA STREAMING PLATFORM

Here's a smattering of approaches important to us and the technologies we use: Everything we do is as-code in version control; we don't like clicking buttons or doing things manually. All development or infra config changes go through a pull-request process, so you'll always have a say to thumbs up or down things you catch. Everything should have test cases, and they go through a continuous integration process. We understand the importance of logs and metrics, so having visibility into the things you need to see to do your job isn't an issue; and if you need to add more metrics or see more logs, it's within our control to improve that. We try to own as much of the platform as we reasonably can, so you don't need to rely on teams outside our own to improve the stack or change the way we do things.

Kafka/Streaming Stack
Code: Spring Boot (Java/Kotlin), RESTful APIs, Golang
Platform: Apache Kafka 2.x, TAP, GCP, Ansible, Terraform, Docker, Vela
Alerting/Monitoring: Grafana, Kibana, ELK stack

As a Senior Engineer on Target's Streaming Platform Team, you'll: Help build out the Kafka/Streaming capability in India. Write and deploy code that enhances the Kafka platform. Design infrastructure solutions that support automation, self-provisioning, product health, security/compliance, resiliency, and a zero-call aspiration, and that are Guest/Team Member experience focused. Troubleshoot and resolve platform operational issues.

Requirements: 4+ years of experience developing in JVM-based languages (e.g., Java/Kotlin). Ability to apply skills to solve problems and aptitude to learn additional technologies or go deeper in an area. Good basic programming/infrastructure skills and the ability to quickly gather the skills necessary to accomplish the task at hand. Intermediate knowledge and skills associated with infrastructure-based technologies. Works across the team to recommend solutions that are in accordance with accepted testing frameworks. Experience with modern platforms and CI/CD stacks (e.g., GitHub, Vela, Docker). Highly productive, self-starter, and self-motivated. Passionate about staying current with new and evolving technologies.

Desired: 4+ years of experience developing high-quality applications and/or supporting critical enterprise platforms. Experience with Kafka, containers (k8s), ZooKeeper, and at least one major public cloud provider (GCP/AWS/Azure). Familiarity with Golang and microservices architecture is a big plus. Participate in day-to-day support requests by performing admin tasks. Install and maintain standard Kafka components: Control Center, ZooKeeper, and brokers. Strong understanding of infrastructure/software and how these systems are secured, analyzed, and investigated. A contact point for the team, able to help answer questions for other groups and/or management. Partner with teams to prioritize and improve services throughout the software development lifecycle. Personal or professional experience contributing to open-source projects. Innovative mindset and willingness to push new ideas into the company.

Useful Links:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging
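
Illustrative aside (not from the posting): a minimal Spring Boot + spring-kafka consumer sketch in Java, roughly the shape of a service on a Kafka streaming platform; the topic, group id, and handler logic are assumptions:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class StreamingAppSketch {
    public static void main(String[] args) {
        SpringApplication.run(StreamingAppSketch.class, args);
    }
}

@Component
class GuestEventConsumer {
    // Topic and group id are placeholders; spring-kafka wires the consumer from application.yml properties.
    @KafkaListener(topics = "guest-events", groupId = "streaming-platform-sketch")
    public void onMessage(String payload) {
        // A real handler would deserialize, validate, and emit metrics here.
        System.out.println("received: " + payload);
    }
}
```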

Posted 2 weeks ago

Apply

5.0 - 9.0 years

3 - 6 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.
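
Illustrative aside (not from the posting): a minimal Java consumer-group sketch with manual offset commits, illustrating the consumer-group management mentioned above; the broker, group id, and topic are placeholders:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ConsumerGroupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");      // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "invoice-processors");         // consumer group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);              // commit manually after processing

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("invoices"));                             // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                consumer.commitSync();   // offsets committed only after successful processing
            }
        }
    }
}
```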

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Bengaluru

Work from Office

Job Title : Kafka Integration Specialist We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Mumbai

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Kolkata

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted 3 weeks ago

Apply

6.0 - 8.0 years

6 - 13 Lacs

Navi Mumbai

Work from Office

Role & responsibilities
The candidate should be technically sound in multiple distributions such as Cloudera, Confluent, open-source Kafka, etc. The candidate should be technically sound in Kafka and ZooKeeper. The candidate should be well versed in capacity planning and performance tuning. The candidate should have expertise in implementing security in the ecosystem (Hadoop security: Ranger, Kerberos, SSL, etc.). The candidate should have expertise in DevOps tools such as Ansible, Nagios, shell scripting, Python, Jenkins, Git, Maven, etc. to implement automation. The candidate should be able to monitor, debug, and provide RCA for any service failure. Knowledge of network infrastructure, e.g., TCP/IP, DNS, firewalls, routers, load balancers, etc. Creative analytical and problem-solving skills. Provide RCAs for critical and recurring incidents. Provide on-call service coverage within a larger group. Good aptitude in multi-threading and concurrency concepts.

Preferred candidate profile
1. Prepare SOPs, gather asset information, and create architecture diagrams.
2. Prepare operational manuals. The DB Admin L3 will provide subject-matter expertise for DB administrators, resolve complex issues, and guide and coach L2 engineers.
3. Provide recommendations for service improvement.

Desired Certifications
1. Any associate-level cloud certification from the proposed CSP is mandatory.
2. Kafka certification.

If interested, connect with us at this number: 8369887673

Posted 3 weeks ago

Apply

8.0 - 12.0 years

20 - 30 Lacs

Bengaluru

Remote

Role & responsibilities
1. Strong experience in IBM Sterling File Gateway, Integrator, Global Mailbox, and Control Center.
2. Strong experience in IBM Sterling installation, upgrade, and configuration activities.
3. Troubleshoot issues related to IBM Sterling business processes, system and network connectivity, Cassandra, MQ, ZooKeeper, and other components.
4. Knowledge of Linux and Windows commands.
5. Knowledge of Python, PowerShell, or shell scripting.
6. Knowledge of MFT products like Globalscape and GoAnywhere (added advantage).
7. SQL query and BPML knowledge is a plus.
8. Open to 24/7 support and on-call.
Preferred candidate profile

Posted 3 weeks ago

Apply

10.0 - 17.0 years

30 - 45 Lacs

Pune, Bengaluru

Hybrid

MTS 2 / MTS 3 / MTS 4 / Senior Member of Technical Staff (Datapath, Filesystem, Networking, Storage)

The Opportunity
The Stargate team is looking for individuals who are in sync with our values and are passionate about distributed system software development. This is an opportunity to work with software that powers Nutanix Enterprise Cloud. You will get a chance to apply and broaden your expertise in storage, virtualization, distributed systems, cloud services, k8s, and AI systems storage. Container Attached Storage is a Kubernetes-native, software-defined storage solution that allows k8s admins and app developers to manage storage with an application-centric approach. Cloud Native AOS offers Container Attached Storage using Kubernetes pods to run the AOS distributed storage fabric, enabling seamless integration with cloud-native stateful workloads. The platform supports dynamic provisioning, thin provisioning, data efficiency, and application-centric snapshots. Cloud Native AOS can be used in both hyperconverged and disaggregated storage environments in a hybrid cloud environment. The stateful application Pods use the Nutanix CSI driver to consume storage entities that the AOS Pods present as Persistent Volumes. Much like an on-premises HCI setup, Cloud Native AOS provides all the core Nutanix data management and copy data management functionalities.

About the Team
At Nutanix, you will be joining the Cloud Data Platform (CDP) team, a vibrant and innovative group made up of talented individuals located in both the US and India. Our team culture embraces collaboration and creativity, encouraging everyone to contribute their ideas and perspectives. We believe that a diverse and inclusive environment fosters innovation, and we strive to maintain a supportive atmosphere where all team members can thrive. You will report to the Director of Engineering, who is dedicated to fostering professional growth and enabling team success. Our work setup is hybrid, requiring you to come into the office 2-3 days a week as part of a balanced approach that blends in-person collaboration with the flexibility of remote work.

Your Role
Architect, design, and develop storage software for a converged computing+storage platform for the software-defined data center. Develop a deep understanding of complex distributed systems and design innovative solutions for customer requirements. Work on performance, scale-out, and resiliency of distributed storage systems. Work closely with development, test, documentation, and product management teams to deliver high-quality products in a fast-paced environment. Engage with customers and support when needed to solve production issues.

What You Will Bring
Fully hands-on, with a love of programming and rock-solid skills in one or more languages: C++, Go, Python; kernel programming is optional. 5 to 20 years of experience. Extensive knowledge of UNIX/Linux OS and Kubernetes. Development experience in file systems, operating systems, database back-ends, distributed storage systems, and cloud-based storage technologies. Develop a deep understanding of complex distributed systems. Resolve issues related to large-scale data organization, algorithm scalability, concurrent programming, asynchronous communication, efficient concurrency, reliability, DR, and fault tolerance. Improve performance, scale-out, and resiliency of our distributed control plane. Work closely with other development teams, testers, documentation writers, and product management to deliver high-quality products in a fast-paced environment. Engage with customers and support when needed to solve production issues. Understanding of storage access protocols and features (NFS/CIFS/S3/cloud). Software development lifecycle practices such as git, code reviews, and Jira. Experience with Hadoop, MapReduce, Cassandra, Zookeeper, and other large-scale distributed systems preferred. Familiarity with OS internals, concepts of distributed data management, and design/implementation tradeoffs in building clustered, high-performance, fault-tolerant distributed systems software. Strong fundamentals in TCP/IP. Efficiency in designing high-performance, low-latency modules. Excellent written and verbal communication skills. Experience working with virtualization technologies like VMware, Hyper-V, or Xen; VMware preferred. Familiarity with x86 architecture, virtualization, and/or storage management. A Bachelor's degree in Computer Science or a related field is required; an advanced degree in Computer Science is preferred.

Work Arrangement
Hybrid: This role operates in a hybrid capacity, blending the benefits of remote work with the advantages of in-person collaboration. For most roles, that will mean coming into an office a minimum of 3 days per week; however, certain roles and/or teams may require more frequent in-office presence. Additional team-specific guidance and norms will be provided by your manager.
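
Illustrative aside (not from the posting): a minimal leader-election sketch using Apache Curator over ZooKeeper, one common coordination pattern in the distributed systems this role covers; the ensemble, znode path, and participant id are assumptions:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LeaderElectionSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder ensemble; a real deployment would use the cluster's ZooKeeper quorum.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Each node competes for leadership under a shared znode path.
        try (LeaderLatch latch = new LeaderLatch(client, "/services/metadata-manager/leader", "node-1")) {
            latch.start();
            latch.await();   // blocks until this participant becomes leader
            System.out.println("Acting as leader; performing coordinated work...");
        } finally {
            client.close();
        }
    }
}
```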

Posted 3 weeks ago

Apply

10.0 - 14.0 years

30 - 40 Lacs

Bengaluru

Work from Office

The Opportunity
Nutanix has disrupted the multi-billion-dollar virtualization market by pioneering the first converged compute and storage virtualization appliance that can incrementally scale out to manage petabytes of data while running tens of thousands of virtual machines. We strive to bring simplicity to data center management and constantly challenge ourselves to simplify complex systems. At Nutanix, we are building the next-generation platform to help enterprises model and develop applications and encapsulate application architecture and deployment as code. We aim to provide application lifecycle operations such as build, deploy, start, stop, upgrade, and retire out of the box. We want enterprises to be able to manage and move their application workloads across and between bare metal, VMs on-premises or in the cloud, and containers.

About the Team
Nutanix Cloud Manager (NCM) is a key portfolio within the Nutanix stack. As we expand our offerings to support modern AI workloads, we are building a unified hybrid cloud solution that enables customers to run applications seamlessly while automating Day-0 and Day-2 operations, defining security policies, and monitoring compliance metrics in a single platform. In this role, you will contribute to delivering NCM's capabilities across the Application Automation, Cost Management, and Security products.

Your Role
Hire, coach, and grow a high-performing team of talented engineers with diverse skill sets. Collaborate with the geo-distributed team to own and deliver projects end to end. Communicate across functional teams and drive engineering initiatives. Participate in architecture and technical design discussions and drive product direction and strategy. Love of programming and extensive knowledge of UNIX/Linux. Familiarity with x86 architecture, virtualization, containers, and/or storage management. Build and manage web-scale applications.

Management Responsibilities
Manage and grow the existing 10+ person team, including tech leads. Refine and grow existing processes or develop new ones to enable smooth functioning of the engineering team, with a focus on developer productivity. Drive development of timely and high-quality software releases.

What You Will Bring
Working experience with storage, networking, virtualization (Nutanix, VMware, KVM), and/or cloud technologies (AWS, Azure, GCP). Familiarity with OS internals, concepts of distributed data management, and web-scale systems, and proven ability in having built clustered, high-performance, fault-tolerant distributed application or systems software. Strong experience in building and managing web-scale applications. Experience in one of the following programming languages: Python, GoLang, C/C++, or Java. Strong understanding of concurrency patterns, multithreading concepts, and debugging techniques. Working experience with virtualization and/or cloud technologies. Experience with databases (SQL and NoSQL) and messaging technologies (NATS, Kafka, or Pulsar). Experience with Hadoop, MapReduce, Cassandra, Zookeeper, and other large-scale distributed database systems preferred.

Qualifications
BS/MS in Computer Science, Engineering, or equivalent. 10+ years of experience, with 3+ years of management experience. Proven hands-on technical management. Experience working in a high-growth multinational company environment. Experience in Agile methodologies.

Work Arrangement
Hybrid: This role operates in a hybrid capacity, blending the benefits of remote work with the advantages of in-person collaboration. For most roles, that will mean coming into an office a minimum of 3 days per week; however, certain roles and/or teams may require more frequent in-office presence. Additional team-specific guidance and norms will be provided by your manager.

Posted 3 weeks ago

Apply

10.0 - 14.0 years

30 - 40 Lacs

Bengaluru

Work from Office

The Opportunity
We are looking for passionate developers to work on scalable distributed systems. You will contribute to the design and development of scalable distributed systems covering various layers (the distributed storage layer, control plane, and management plane) for both hybrid and multi-cloud environments. We are looking for individuals who are very passionate about technology and how it can be used to solve deep technical problems. The Disaster Recovery and Backup team is responsible for building next-generation data protection and disaster recovery solutions for hybrid/multi-cloud datacenters. The data protection software platforms enable customers to protect, replicate, and recover workloads in a hybrid/multi-cloud environment.

About the Team
At Nutanix, you will be joining the DR & Backup team, a dynamic and innovative group that spans the US and India. Our team is dedicated to leveraging cutting-edge technologies to reshape the landscape of data backup and recovery solutions. We pride ourselves on fostering a culture where creativity and collaboration thrive, encouraging every team member to share their ideas and contribute to groundbreaking advancements. You will report to the Sr. Engineering Manager, who emphasizes an empowering leadership style and encourages team members to take ownership of their work. Your role will operate in a hybrid setup, requiring you to be in the office 2-3 days a week to facilitate collaboration and team bonding. While we value in-person interactions, we also support flexibility in our work arrangements.

Your Role
Hire, coach, and grow a high-performing team of talented engineers with diverse skill sets. Collaborate with the geo-distributed team to own and deliver projects end to end. Communicate across functional teams and drive engineering initiatives. Participate in architecture and technical design and drive product direction and strategy.

Management Responsibilities
Manage and grow an existing team. Refine and grow existing processes or develop new ones to enable the smooth functioning of the engineering team. Drive development of timely and high-quality software releases.

What You Will Bring
Understanding of design tradeoffs in building clustered, high-performance, fault-tolerant distributed system software. Love of programming, and the ability and passion to solve complex problems. Strong experience in C++ and systems programming; Python or Go would be an added bonus. Proven experience building scalable, fault-tolerant distributed or cloud-native systems. Familiarity with concepts of disaster recovery, data protection, distributed data storage, and clustered, high-performance, fault-tolerant distributed system software. Experience working in an Agile/Scrum development process, including DevOps and CI/CD. Experience with Hadoop, MapReduce, Cassandra, Zookeeper, and other large-scale distributed systems is preferred. A bias for action and the ability to rapidly implement and iterate on solutions to complex technical problems spanning multiple teams and technologies. Comfortable working in a fast-moving, agile environment.

Qualifications and Experience
BS/MS in Computer Science or Engineering. 10+ years of experience, with 2+ years of management experience. Proven hands-on technical management. Experience working in a high-growth multinational company environment.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

9 - 13 Lacs

Bengaluru

Work from Office

As a Technical Specialist, you will develop and enhance Optical Network Management applications, leveraging experience in optical networks. You will work with fault supervision and performance monitoring. Collaborating in an agile environment, you will drive innovation, optimize efficiency, and explore UI technologies like React. Your role will focus on designing, coding, testing, and improving network management applications to enhance functionality and customer satisfaction.

You have: A Bachelor's degree and 8 years of experience (or equivalent) in optical networks. Hands-on working experience with Core Java, Spring, Kafka, Zookeeper, Hibernate, and Python. Working knowledge of RDBMS, PL/SQL, Linux, Docker, and database concepts. Exposure to UI technologies like React.

It would be nice if you also had: Domain knowledge in OTN and photonic network management. Strong communication skills and the ability to manage complex relationships.

Develop software for network management of Optics Division products, including Photonic/WDM, Optical Transport, SDH, and SONET. Enable user control over network configuration through Optics Network Management applications. Utilize Core Java, Spring, Kafka, Python, and RDBMS to build high-performing solutions for network configuration. Interface Optics Network Management applications with various Network Elements, providing a user-friendly graphical interface and implementing algorithms to simplify network management and reduce OPEX. Deploy Optics Network Management applications globally, supporting hundreds of installations for customers. Contribute to new developments and maintain applications as part of the development team, focusing on enhancing functionality and customer satisfaction.

Posted 4 weeks ago

Apply