
7624 Kafka Jobs - Page 38

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 7.0 years

3 Lacs

Noida

On-site

Source: Glassdoor

Location: Noida, India

Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information and encrypt data to make the connected world more secure. Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai and Pune, among others. Over 1,800 employees work with Thales and its joint ventures in India. Since the beginning, Thales has played an essential role in India's growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace and Digital Identity and Security markets.

Job Profile: Deployment and monitoring of complex Kubernetes/microservice-based applications on any cloud provider (Azure/AWS/GCP).

Skill Set Required:
- Solid skills in Azure/AWS/GCP, Kubernetes and the Unix/Linux platform.
- 5-7 years of total experience, mainly in a DevOps role.
- Knowledge of cluster and cloud/VM-based solution deployment and management, including networking, servers and storage.
- Experience in DevOps with Kubernetes.
- Strong knowledge of CI/CD tools (Jenkins, Bamboo, etc.).
- Experience with cloud platforms and infrastructure automation.
- Expertise in Python, Go or a similar scripting language.
- Must have completed at least one project end-to-end in a technical DevOps role, preferably in a global organization.
- Practical understanding of Ansible and Docker, and implementation of solutions based on these tools, is preferred.
- Ability to handle escalations.
- Proven ability to learn and apply new skills and processes quickly and to train others in the team.
- Demonstrated experience as an individual contributor with customer focus and service orientation, with solid leadership and coaching skills.
- Ability to communicate courteously and effectively with customers, third-party vendors and partners.
- Proficiency with collaboration and issue-tracking tools such as Jira and Confluence.
- Excellent written and verbal communication skills in English.

Desirable:
- Exposure to Datadog, Kafka, Keycloak or similar solutions.
- Good exposure to Terraform/Terragrunt.

At Thales we provide careers, not only jobs. With 80,000 employees in 68 countries, our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working. Great journeys start here. Apply now!
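Purely as an illustration of the deployment-monitoring side of a role like this (not part of the posting), below is a minimal sketch using the official Kubernetes Python client. It assumes a local kubeconfig is available, and the namespace is a placeholder.

```python
# Minimal sketch: list pods in a namespace and report any that are not Running.
# Assumes the official `kubernetes` Python client and a local kubeconfig
# (use config.load_incluster_config() when running inside a cluster);
# the namespace is a placeholder assumption.
from kubernetes import client, config

def report_unhealthy_pods(namespace: str = "default") -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace=namespace).items:
        phase = pod.status.phase
        if phase != "Running":
            print(f"{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```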

Posted 4 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

hackajob is collaborating with Wipro to connect them with exceptional tech professionals for this role.

Title: Data Scientist with GenAI-2
Requisition ID: 45466
City: Bengaluru
Country/Region: IN

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description
Research, design, develop, and modify computer vision and machine learning algorithms and models, leveraging experience with technologies such as Caffe, Torch, or TensorFlow.
- Shape product strategy for highly contextualized applied ML/AI solutions by engaging with customers, solution teams, discovery workshops and prototyping initiatives.
- Help build a high-impact ML/AI team by supporting recruitment, training and development of team members.
- Serve as an evangelist by engaging in the broader ML/AI community through research, speaking/teaching, formal collaborations and/or other channels.

Knowledge & Abilities:
- Designing integrations of, and tuning, machine learning and computer vision algorithms
- Researching and prototyping techniques and algorithms for object detection and recognition
- Convolutional neural networks (CNNs) for image classification and object detection
- Familiarity with embedded vision processing systems
- Open-source tools and platforms
- Statistical modeling, data extraction and analysis
- Constructing, training, evaluating and tuning neural networks

Mandatory Skills:
- One or more of the following: Java, C++, Python
- Deep learning frameworks such as Caffe, Torch or TensorFlow, and an image/video vision library such as OpenCV, Clarifai, Google Cloud Vision, etc.
- Supervised and unsupervised learning
- Feature learning, text mining, and prediction models (e.g., deep learning, collaborative filtering, SVM, and random forest) on big data computation platforms (Hadoop, Spark, Hive, and Tableau)
- One or more of the following: Tableau, Hadoop, Spark, HBase, Kafka

Experience:
- 2-5 years of work or educational experience in Machine Learning or Artificial Intelligence
- Creation and application of machine learning algorithms to a variety of real-world problems with large datasets
- Building scalable machine learning systems and data-driven products working with cross-functional teams
- Working with cloud services such as AWS, Microsoft Azure, IBM, and Google Cloud
- Working with one or more of the following: natural language processing, text understanding, classification, pattern recognition, recommendation systems, targeting systems, ranking systems or similar

Nice to Have:
- Contribution to research communities and/or efforts, including publishing papers at conferences such as NIPS, ICML, ACL, CVPR, etc.
Education: BA/BS (advanced degree preferable) in Computer Science, Engineering or a related technical field, or equivalent practical experience.

Wipro is an Equal Employment Opportunity employer and makes all employment and employment-related decisions without regard to a person's race, sex, national origin, ancestry, disability, sexual orientation, or any other status protected by applicable law.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
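As a concept illustration only (not taken from the posting), the sketch below shows a minimal convolutional network for image classification using tf.keras; the input shape and class count are arbitrary placeholder assumptions.

```python
# Minimal CNN image-classifier sketch using tf.keras.
# The 64x64 RGB input shape and 10-class output are placeholder assumptions.
import tensorflow as tf

def build_model(num_classes: int = 10) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_model().summary()  # prints the layer-by-layer architecture
```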

Posted 4 days ago

Apply

5.0 - 7.0 years

3 - 4 Lacs

Calcutta

On-site

Source: Glassdoor

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within….

Responsibilities:
- Strong hands-on experience in .NET, Angular and TypeScript
- Strong hands-on experience in Java Spring Boot
- Strong experience in AKS and containers
- Experience in API gateways, databases and microservice design
- Experience in Kafka

Mandatory skill sets: .NET with Angular
Preferred skill sets: Azure Functions, Kafka, DevOps
Years of experience required: 5-7 years
Education qualification: B.Tech/B.E.

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Angular
Optional Skills: Apache Kafka, DevOps, Microsoft Azure Functions
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:

Posted 4 days ago

Apply

1.5 - 2.0 years

0 - 0 Lacs

India

On-site

Source: Glassdoor

Node.js Backend Developer

Company Name: Klizo Solutions Pvt. Ltd.
Company Website: www.klizos.com
Location: Astra Tower, Newtown, Akanksha More (near City Centre 2)
No. of vacancies: 2
Job Type: In office, full-time
Working Days: Monday to Friday (5 days)
Shift Timing: 12:00 pm to 09:00 pm (needs to be flexible)
Week Off: Saturday & Sunday (fixed off)
Experience: 1.5-2 years
Salary: 20K-30K (based on current salary, experience and interview performance)

Job Summary: We are looking for a skilled Node.js Developer with 1.5-2 years of experience to join our innovative team. The ideal candidate will be responsible for designing, developing, and maintaining scalable and robust server-side applications, APIs, and microservices. You will leverage your expertise in Node.js and TypeScript, working closely with our front-end teams to ensure seamless integration and deliver exceptional full-stack user experiences.

Key Responsibilities:
- Develop, implement, and maintain high-performance, scalable, and secure backend services and RESTful APIs using Node.js and TypeScript.
- Design and manage database schemas, ensure data integrity, and optimize database interactions, specifically with MongoDB.
- Collaborate closely with front-end developers to define API contracts, ensure smooth data flow, and integrate backend services with user-facing features.
- Implement robust authentication, authorization, and data security measures.
- Optimize backend applications for maximum speed, scalability, and efficiency, identifying and resolving performance bottlenecks.
- Build reusable code and libraries for future backend development, adhering to best practices.
- Conduct thorough testing of backend components, including unit, integration, and API testing, to ensure reliability.
- Stay up to date with the latest Node.js ecosystem trends, security best practices, and new technologies to continuously improve our stack.
- Maintain clear and comprehensive documentation for APIs, services, and backend processes.
- Debug and resolve issues in backend applications in a timely and efficient manner.
- Participate actively in Agile/Scrum development cycles, including sprint planning, daily stand-ups, and retrospectives.

Must-Have Skills and Qualifications:
- 1.5-2 years of professional experience as a Node.js Developer, with a portfolio demonstrating relevant backend projects.
- Proficiency in Node.js and TypeScript.
- Strong understanding and hands-on experience with MongoDB, including schema design and query optimization.
- Experience with payment gateway integrations, specifically Stripe and PayPal.
- Familiarity with implementing social login (e.g., OAuth, JWT, Firebase Authentication).
- Proven experience designing, developing, and implementing RESTful APIs.
- Familiarity with version control systems, such as Git.
- Knowledge of backend performance optimization and scalability techniques.
- Understanding of microservices architecture concepts and distributed systems.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication and teamwork skills, with the ability to articulate complex technical concepts.
- Ability to work independently, prioritize tasks, and manage time effectively in a fast-paced environment.

Preferred Skills:
- Strong experience with a popular Node.js framework (e.g., Express.js, NestJS), particularly NestJS.
- Experience with ORMs/ODMs (e.g., Sequelize, Mongoose) for other database types.
- Knowledge of message queues (e.g., RabbitMQ, Apache Kafka) and caching mechanisms such as Redis for asynchronous communication and performance enhancement.
- Contribution to open-source projects or active participation in developer communities.

Interested candidates are requested to send their updated CV through indeed.com or email kuheli@klizos.com / a.mandal@klizos.com to schedule an interview.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹30,000.00 per month
Benefits: Paid sick time, paid time off, Provident Fund
Location Type: In-person
Schedule: Day shift, Monday to Friday, weekend availability
Application Question(s): Current take-home monthly salary? Expected take-home monthly salary? Maximum notice period? Can you join immediately?
Education: Bachelor's (Preferred)
Experience: MongoDB: 2 years (Preferred); Node.js: 2 years (Preferred); TypeScript: 2 years (Preferred); Stripe and PayPal: 2 years (Preferred); RESTful APIs: 2 years (Preferred); Git: 2 years (Preferred)
Language: English (Preferred)
Work Location: In person

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Experience: 5 to 12 years. This is a work-from-office (WFO) role.

Must Have:
- Experience in .NET Core and C#
- Experience in observability tools, including OpenTelemetry, Prometheus, Grafana, and Elastic (Kibana) (any one)
- Hands-on experience with CI/CD pipelines, containerization using Docker, and orchestration tools like Kubernetes

Required Expertise: Technical Expertise and Skills
- 5+ years of experience in software development, with a strong focus on .NET Core and C#.
- Deep expertise in multi-threaded programming, asynchronous programming, and handling concurrency in distributed systems.
- Extensive experience in designing and implementing domain-driven microservices with advanced architectural patterns like Clean Architecture or Vertical Slice Architecture.
- Strong understanding of event-driven systems, with knowledge of messaging frameworks such as Kafka, AWS SQS, or RabbitMQ.
- Proficiency in observability tools, including OpenTelemetry, Prometheus, Grafana, and Elastic (Kibana).
- Hands-on experience with CI/CD pipelines, containerization using Docker, and orchestration tools like Kubernetes.
- Expertise in Agile methodologies under Scrum practices.
- Solid knowledge of Git and version control best practices.

Key Responsibilities

System Design and Development: Architect and develop real-time, domain-driven microservices using .NET Core to ensure scalability, modularity, and performance. Leverage multi-threaded programming techniques and asynchronous programming paradigms to build systems optimized for high-concurrency workloads. Implement event-driven architectures to enable seamless communication between distributed services, leveraging tools such as Kafka or AWS SQS.

System Performance and Optimization: Optimize applications for low latency and high throughput in trading environments, addressing challenges related to thread safety, resource contention, and parallelism. Design fault-tolerant systems capable of handling large-scale data streams and real-time events. Proactively monitor and resolve performance bottlenecks using advanced observability tools and techniques.

Architectural Contributions: Contribute to the design and implementation of scalable, maintainable architectures, including Clean Architecture, Vertical Slice Architecture, and CQRS. Collaborate with architects and stakeholders to align technical solutions with business requirements, particularly for trading and financial systems. Employ advanced design patterns to ensure robustness, fault isolation, and adaptability.

Agile Collaboration: Participate actively in Agile practices, including Scrum ceremonies such as sprint planning, daily stand-ups and retrospectives. Collaborate with Product Owners and Scrum Masters to refine technical requirements and deliver high-quality, production-ready software.

Code Quality and Testing: Write maintainable, testable, and efficient code adhering to test-driven development (TDD) methodologies. Conduct detailed code reviews, ensuring adherence to best practices in software engineering, coding standards, and system architecture. Develop and maintain robust unit, integration, and performance tests to uphold system reliability and resilience.

Monitoring and Observability: Integrate OpenTelemetry to enhance system observability, enabling distributed tracing, metrics collection, and log aggregation. Collaborate with DevOps teams to implement real-time monitoring dashboards using tools such as Prometheus, Grafana, and Elastic (Kibana).
Ensure systems are fully observable, with actionable insights into performance and reliability metrics.

Skills: Vertical Slice Architecture, Kibana, microservices, Kubernetes, Elastic (Kibana), Clean Architecture, Grafana, Git, AWS, Agile, event-driven systems, multi-threaded programming, .NET, C#, Agile methodologies, RabbitMQ, Prometheus, Kafka, asynchronous programming, CI/CD pipelines, telemetry, domain-driven microservices, CI/CD, OpenTelemetry, AWS SQS, .NET Core, Docker

Posted 4 days ago

Apply

5.0 - 7.0 years

18 - 20 Lacs

Pune

Work from Office

Source: Naukri

Required skills:
- Experience in the Data Warehousing domain
- Mandatory: experience and deep knowledge of Spark, Kafka, open table formats, and NoSQL and SQL databases
- Experience in Kubernetes
- Hands-on experience in Java, microservices, and Kubernetes (K8s)
- Should have hands-on coding experience

Considered a plus:
- Monitoring tools such as Prometheus and Grafana
- Working with Copilot, writing prompts to improve efficiency during development
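For illustration only: the posting asks for hands-on Java, but a compact way to sketch the Spark-Kafka flow it describes is Spark Structured Streaming in PySpark. This assumes the spark-sql-kafka connector is on the classpath; the broker, topic, and paths are placeholders.

```python
# Illustrative sketch (not the employer's code): consume a Kafka topic with
# Spark Structured Streaming and land it as Parquet. Requires the
# spark-sql-kafka connector package; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("kafka-to-parquet-sketch")
         .getOrCreate())

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder
          .option("subscribe", "events")                      # placeholder topic
          .option("startingOffsets", "latest")
          .load()
          .select(col("key").cast("string"),
                  col("value").cast("string"),
                  col("timestamp")))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/warehouse/events")            # placeholder path
         .option("checkpointLocation", "/data/checkpoints/events")
         .start())

query.awaitTermination()
```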

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

Andhra Pradesh

On-site

Source: Glassdoor

Software Engineering Senior Analyst

About Evernorth: Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don't, won't or can't. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there. Excited to grow your career? We value our talented employees, and whenever possible strive to help one of our associates grow professionally before recruiting new talent to our open positions. If you think the open position you see is right for you, we encourage you to apply! Our people make all the difference in our success.

We are looking for an engineer to develop, optimize and fine-tune AI models for performance, scalability, and accuracy. In this role you will support the full software lifecycle of design, development, testing, and support for technical delivery. This role requires working with both onsite and offshore team members to properly define testable scenarios based on requirements/acceptance criteria.

Responsibilities:
- Be hands-on in the design and development of robust solutions to hard problems, while considering scale, security, reliability, and cost
- Support other product delivery partners in the successful build, test, and release of solutions
- Be part of a fast-moving team, working with the latest tools and open-source technologies
- Work on a development team using agile methodologies
- Understand the business and the application architecture end to end
- Solve problems by crafting software solutions using maintainable and modular code
- Participate in daily team standup meetings where you'll give and receive updates on the current backlog and challenges
- Participate in code reviews
- Ensure code quality and deliverables
- Provide impact analysis for new requirements or changes
- In-depth knowledge of a single team's business domain and the ability to express or communicate technical work in business-value terminology
- Firm grasp of design disciplines and architectural patterns, aligning with and influencing fellow team members to follow them
- Engage in fostering and improving organizational culture

Qualifications

Required Skills:
- Strong experience in C#, SOLID design principles/patterns, OOP, data structures, ASP.NET Core, ASP.NET Web API, ReactJS, xUnit, TDD, Kafka, microservices, event-driven architecture, Azure (including Terraform and AKS), Cosmos DB
- Knowledge of service-oriented architecture, SonarQube, Checkmarx
- Ability to speak/write fluently in English
- Experience with agile methodology, including Scrum
- Experience with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example

Required Experience & Education:
- Software engineer (with 3-5 years of overall experience) with at least 3 years in the key skills listed above
- Bachelor's degree equivalent in Information Technology, Business Information Systems, Technology Management, or a related field of study.
Equal Opportunity Statement Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations. About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.

Posted 4 days ago

Apply

6.0 years

0 Lacs

Andhra Pradesh

On-site

Source: Glassdoor

Design, develop, test, and deploy scalable and resilient microservices using Java and Spring Boot. Collaborate with cross-functional teams to define, design, and ship new features. Work on the entire software development lifecycle, from concept and design to testing and deployment. Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability. Integrate microservices with Kafka for real-time data streaming and event-driven architecture. Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance. Keep up-to-date with industry trends and advancements, incorporating best practices into our development processes. Should Be a Java Full Stack Developer. Bachelor's or Master's degree in Computer Science or related field. 6+ years of hands-on experience in JAVA FULL STACK - ANGULAR + JAVA SPRING BOOT Proficiency in Spring Boot and other Spring Framework components. Extensive experience in designing and developing RESTful APIs. Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS. Experience with Kafka for building event-driven architectures. Strong database skills, including SQL and NoSQL databases. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 4 days ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Pune

Hybrid

Source: Naukri

So, what's the role all about?
We are seeking a skilled and experienced developer with expertise in .NET programming, along with knowledge of LLMs and AI, to join our dynamic team. As a Contact Center Developer, you will be responsible for developing and maintaining contact center applications, with a specific focus on AI functionality. Your role will involve designing and implementing robust and scalable AI solutions, ensuring an efficient agent experience. You will collaborate closely with cross-functional teams, including software developers, system architects, and managers, to deliver cutting-edge solutions that enhance our contact center experience.

How will you make an impact?
- Develop, enhance, and maintain contact center applications with an emphasis on copilot functionality.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Perform system analysis, troubleshooting, and debugging to identify and resolve issues.
- Conduct regular performance monitoring and optimization of code to ensure optimal customer experiences.
- Maintain documentation, including technical specifications, system designs, and user manuals.
- Stay up to date with industry trends and emerging technologies in contact center, AI, LLM and .NET development and apply them to enhance our systems.
- Participate in code reviews and provide constructive feedback to ensure high-quality code standards.
- Deliver high-quality, sustainable, maintainable code.
- Participate in reviewing design and code (pull requests) for other team members, with a secure-code focus.
- Work as a member of an agile team responsible for product development and delivery.
- Adhere to agile development principles while following and improving all aspects of the Scrum process.
- Follow established department procedures, policies, and processes.
- Adhere to the company Code of Ethics and CXone policies and procedures.
- Excellent English and experience working in international teams are required.

Have you got what it takes?
- BS or MS in Computer Science or a related degree
- 5-8 years' experience in software development
- Strong knowledge of working with and developing microservices
- Design, develop, and maintain scalable .NET applications specifically tailored for contact center copilot solutions using LLM technologies
- Good understanding of .NET and design patterns, and experience implementing them
- Experience in developing with REST APIs
- Integrate various components, including LLM tools, APIs, and third-party services, within the .NET framework to enhance functionality and performance
- Implement efficient database structures and queries (SQL/NoSQL) to support high-volume data processing and real-time decision-making capabilities
- Utilize Redis for caching frequently accessed data and optimizing query performance, ensuring scalable and responsive application behavior
- Identify and resolve performance bottlenecks through code refactoring, query optimization, and system architecture improvements
- Conduct thorough unit testing and debugging of applications to ensure reliability, scalability, and compliance with specified requirements
- Utilize Git or similar version control systems to manage source code and coordinate with team members on collaborative projects
- Experience with Docker/Kubernetes is a must
- Experience with the cloud service provider Amazon Web Services (AWS) is a must
- Experience with the AWS cloud on any technology (preferably Kafka, EKS, Kubernetes)
- Experience with Continuous Integration workflows and tooling
- Stay updated with industry trends, emerging technologies, and best practices in .NET development and LLM applications to drive innovation and efficiency within the team

You will have an advantage if you also have:
- Strong communication skills
- Experience with a cloud service provider such as Amazon Web Services (AWS), Google Cloud, Azure or an equivalent cloud provider
- Experience with ReactJS

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7443
Reporting into: Sandip Bhattcharjee
Role Type: Individual Contributor

Posted 4 days ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Software Engineer – Backend (SOL00054)
Job Type: Full Time
Location: Hyderabad, Telangana
Experience Required: 5-7 years
CTC: 13-17 LPA

Job Description:
Our client, headquartered in the USA with offices globally, is looking for a Backend Software Engineer to join the team responsible for building the core backend infrastructure for its MLOps platform on AWS. The systems you help build will enable feature engineering, model deployment, and model inference at scale, in both batch and online modes. You will collaborate with a distributed cross-functional team to design and build scalable, reliable systems for machine learning workflows.

Key Responsibilities:
- Design, develop, and maintain backend components of the MLOps platform hosted on AWS.
- Build and enhance RESTful APIs and microservices using Python frameworks such as Flask, Django, or FastAPI.
- Work with WSGI/ASGI web servers such as Gunicorn and Uvicorn.
- Implement scalable and performant solutions using concurrent programming (AsyncIO).
- Develop automated unit and functional tests to ensure code reliability.
- Collaborate with DevOps engineers to integrate CI/CD pipelines and ensure smooth deployments.
- Participate in the on-call rotation to support production issues and ensure high system availability.

Mandatory Skills:
- Strong backend development experience using Python with Flask, Django, or FastAPI.
- Experience working with WSGI/ASGI web servers (e.g., Gunicorn, Uvicorn).
- Hands-on experience with AsyncIO or other asynchronous programming models in Python.
- Proficiency with unit and functional testing frameworks.
- Experience working with AWS (or at least one public cloud platform).
- Familiarity with CI/CD practices and tooling.

Nice-to-Have Skills:
- Experience developing Kafka client applications in Python.
- Familiarity with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Exposure to Apache Spark or similar big data processing frameworks.
- Experience with Docker and container platforms such as AWS ECS or EKS.
- Familiarity with Terraform, Jenkins, or other DevOps/IaC tools.
- Knowledge of Python packaging (Wheel, PEX, Conda).
- Experience with metaprogramming in Python.

Education: Bachelor's degree in Computer Science, Engineering, or a related field.
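For illustration only (not part of the posting), below is a minimal sketch of an async FastAPI endpoint that publishes a request event to Kafka. It assumes the fastapi and aiokafka packages; the broker address, topic name, and payload fields are placeholder assumptions.

```python
# Minimal sketch: async FastAPI endpoint publishing inference requests to Kafka.
# Assumes the `fastapi` and `aiokafka` packages; broker, topic, and the payload
# schema are placeholder assumptions, not details from the posting.
import json

from aiokafka import AIOKafkaProducer
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
producer: AIOKafkaProducer | None = None


class InferenceRequest(BaseModel):
    model_name: str
    features: dict


@app.on_event("startup")
async def startup() -> None:
    global producer
    producer = AIOKafkaProducer(bootstrap_servers="broker:9092")  # placeholder
    await producer.start()


@app.on_event("shutdown")
async def shutdown() -> None:
    if producer is not None:
        await producer.stop()


@app.post("/inference")
async def enqueue_inference(req: InferenceRequest) -> dict:
    # Serialize the request and hand it to the streaming layer asynchronously.
    await producer.send_and_wait("inference-requests",
                                 json.dumps(req.dict()).encode("utf-8"))
    return {"status": "queued", "model": req.model_name}
```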

Posted 4 days ago

Apply

12.0 years

0 Lacs

India

On-site

Source: LinkedIn

With Confluent, organisations can harness the full power of continuously flowing data to innovate and win in the modern digital world. We have a purpose that drives us to do better every day: we're creating an entirely new category within data infrastructure, data streaming. This technology will allow every organisation to create experiences and use the power of data in ways that profoundly impact the way we all live. This impact is our purpose and drives us to do better every day.

One Confluent. One team. One Data Streaming Platform. Data Connects Us.

About The Role
The Kora Background Plane team has the vision to build the best experience for Kora, Confluent's cloud-native Kafka service. As a Staff Software Engineer, you will design and build efficient and performant algorithms for right-sizing, load balancing and seamless scalability of Kora. You will be instrumental in driving the vision and roadmap, providing technical leadership and mentoring, and enabling a high-performing engineering team to tackle complex distributed data challenges at scale.

What You Will Do
- Build the software underpinning the mission-critical Kora Background Plane platform. You will play a crucial role in designing, developing and operationalizing high-performance, scalable, reliable and resilient systems.
- Collaborate effectively across engineering, product, field teams and other key stakeholders to create and execute an impactful roadmap for the Kora Background Plane team.
- Meet and exceed Service Level Agreements (SLAs) for critical cloud services owned by the Kora Background Plane team.
- Evaluate and enhance the efficiency of our platform's technology stack, keeping pace with industry trends and adopting state-of-the-art solutions.
- Provide technical leadership and mentorship, and drive strong teamwork.

What You Will Bring
- 12+ years of relevant software development experience
- 5+ years of experience with designing, building, and scaling distributed systems
- Deep technical expertise in large-scale distributed systems
- Experience running production services in the cloud with demonstrated operational excellence
- Proficiency in Java and Scala
- Ability to influence the team, peers, and management using effective communication and collaborative techniques
- Proven experience in leading and mentoring technical teams
- BS, MS or PhD in computer science or a related field, or equivalent work experience

What Gives You An Edge
- A strong background in distributed storage systems or databases
- Experience in load balancing stateful distributed systems
- Experience developing SaaS services on public cloud providers (AWS, Azure or GCP)
- Interest in evangelism (giving talks at tech conferences, writing blog posts evangelizing Kafka)

Come As You Are
At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact. Employment decisions are made on the basis of job-related criteria without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by applicable law.

Click HERE to review our Candidate Privacy Notice, which describes how and when Confluent, Inc. and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.

Posted 4 days ago

Apply

6.0 years

0 Lacs

India

On-site

Source: LinkedIn

What You'll Do
- Design and deliver high-traffic, high-performance solutions to provide enriched data back to engineering systems
- Define the egress strategies and architectures to expose analytical data from the public cloud
- Collaborate with stakeholders and engineering partners to define the architecture and implement solutions
- Act as a subject matter expert for technical guidance, solution design and best practice
- Develop scalable solutions to implement REST, and utilize GraphQL and API gateways to provide user-friendly interfaces
- Develop streaming data pipelines for custom ingestion, processing and egress to the public cloud
- Design and implement Kafka/Pub/Sub services to publish events adhering to Catalog messaging standards
- Develop containerized solutions and CI/CD pipelines, and utilize orchestration services like Kubernetes
- Define key metrics and troubleshoot logs using Datadog, APM, Kibana, Grafana, Stackdriver, etc.
- Manage and mentor associates; ensure the team is being challenged, exposed to new opportunities, and learning, while still being able to deliver on ambitious goals
- Develop a technical center of excellence within the analytics organization through training, mentorship, and process innovation
- Build, lead, and mentor a team of highly talented data professionals working with petabyte-scale datasets and rigorous SLAs

What You'll Need
- A graduate of a computer science, mathematics, engineering or physical science related degree program, with 6+ years of relevant industry experience
- 6+ years of programming experience with at least one language such as Python, Go, JavaScript or React JS
- Experience leading design and implementation of medium- to large-scale complex projects
- Experience building high-performance, scalable and fault-tolerant services and applications
- Experience with service-oriented architecture (REST and GraphQL) and the ability to architect scalable microservices
- Experience with web frameworks such as Django, FastAPI, Flask, Spring, Grails, Struts
- Experience with NoSQL solutions (MongoDB, HBase/Bigtable, Aerospike) and caching technologies (Redis, Memcached)
- Experience building big data pipelines using cloud-computing technologies preferred
- Experience developing on cloud platforms such as Google Cloud Platform (preferred), AWS, Azure, or Snowflake at scale
- Experience with real-time data streaming tools like Kafka, Kinesis, Pub/Sub, Apache Storm or similar tools
- Experience with Docker containers and Kubernetes orchestration
- Experience with unit testing frameworks and CI/CD implementation (Buildkite preferred)
- Experience with monitoring and logging tools like Datadog, Grafana, Kibana, Splunk, Stackdriver, etc.
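As an illustration of the streaming-pipeline portion of this role (not taken from the posting), below is a minimal Kafka consumer sketch using the confluent-kafka Python client; the broker address, consumer group, and topic are placeholders.

```python
# Minimal Kafka consumer sketch using the confluent-kafka Python client.
# Broker address, consumer group, and topic name are placeholder assumptions.
from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # placeholder
    "group.id": "analytics-egress",       # placeholder
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["catalog-events"])    # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue                      # no message within the timeout
        if msg.error():
            if msg.error().code() == KafkaError._PARTITION_EOF:
                continue                  # end of a partition, keep polling
            raise RuntimeError(msg.error())
        # Downstream processing/egress would go here; we just print the payload.
        print(msg.topic(), msg.partition(), msg.value().decode("utf-8"))
except KeyboardInterrupt:
    pass
finally:
    consumer.close()                      # commit offsets and leave the group cleanly
```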

Posted 4 days ago

Apply

16.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Principal Cloud Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail, Auto, Supply Chain, and Finance.

The opportunity
We're looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM and Insurance. Activities include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data.
- Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [16-20 years]
- Understand current and future-state enterprise architecture.
- Contribute to various technical streams during project implementation.
- Provide product- and design-level technical best practices.
- Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions.
- Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
- Recommend design alternatives for data ingestion, processing and provisioning layers.
- Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop and Spark.
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.

Skills And Attributes For Success
- Architect in designing highly scalable solutions on Azure, AWS and GCP.
- Strong understanding of and familiarity with all Azure/AWS/GCP/Big Data ecosystem components.
- Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
- Hands-on programming experience in Apache Spark using Python/Scala, and Spark Streaming.
- Hands-on experience with major components like cloud ETLs, Spark, Databricks.
- Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB.
- Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
- Solid understanding of ETL methodologies in a multi-tiered stack, integrating with big data systems like Cloudera and Databricks.
- Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms.
- Good knowledge of Apache Kafka and Apache Flume.
- Experience in enterprise-grade solution implementations.
- Experience in performance benchmarking of enterprise applications.
- Experience in data security (in transit and at rest).
- Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
- A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent communication skills (written and verbal, formal and informal).
- Ability to multi-task under pressure and work independently with minimal supervision.
- A team-player attitude and enjoyment of working in a cooperative and collaborative team environment.
- Adaptability to new technologies and standards.
- Participation in all aspects of the big data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
- Responsibility for the evaluation of technical risks and mapping out mitigation strategies.
- Working knowledge of at least one cloud platform: AWS, Azure or GCP.
- Excellent business communication, consulting, and quality process skills.
- Excellence in leading solution architecture, design, build and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains.
- Minimum 7 years of hands-on experience in one or more of the above areas.
- Minimum 10 years of industry experience.

Ideally, you'll also have
- Strong project management skills
- Client management skills
- Solutioning skills
- The innate quality to become the go-to person for marketing, pre-sales and solution accelerators within the practice.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
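For illustration only (not part of the posting), a minimal PySpark sketch of the batch-ingestion pattern described above is shown below; the Hive database, table names, column names, and output path are placeholder assumptions.

```python
# Minimal batch-ingestion sketch in PySpark: read a Hive table, apply a simple
# transformation, and write partitioned Parquet to a provisioning layer.
# Database, table, column, and path names are placeholder assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("batch-ingestion-sketch")
         .enableHiveSupport()            # required to read managed Hive tables
         .getOrCreate())

raw = spark.table("raw_db.transactions")          # placeholder Hive table

curated = (raw
           .filter(F.col("amount") > 0)
           .withColumn("txn_date", F.to_date("event_ts"))
           .select("txn_id", "account_id", "amount", "txn_date"))

(curated.write
 .mode("overwrite")
 .partitionBy("txn_date")
 .parquet("/data/curated/transactions"))          # placeholder output path

spark.stop()
```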

Posted 4 days ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies.
- Develop streaming pipelines.
- Work with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Preferred Education: Master's Degree

Required Technical And Professional Expertise
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers such as Kafka

Preferred Technical And Professional Experience
- Certification in AWS and Databricks, or Cloudera Spark certified developers
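Purely as an illustration of file-to-stream ingestion (not taken from the posting), the sketch below uses the kafka-python client to publish newline-delimited JSON records to a topic; the broker, topic name, input path, and key field are placeholder assumptions.

```python
# Minimal file-to-Kafka ingestion sketch using the kafka-python client.
# Broker address, topic name, input path, and the "id" key field are
# placeholder assumptions.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",                      # placeholder
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

with open("/data/incoming/events.jsonl") as fh:           # placeholder path
    for line in fh:
        record = json.loads(line)
        # Key by an id field if present so related events share a partition.
        key = str(record.get("id", "")).encode("utf-8")
        producer.send("raw-events", key=key, value=record)

producer.flush()   # block until all buffered records are delivered
producer.close()
```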

Posted 4 days ago

Apply

7.0 - 12.0 years

14 - 24 Lacs

Chennai

Work from Office

Source: Naukri

Job Description
- Bachelor's degree in computer science, computer engineering, or related technologies.
- Seven years of experience in systems engineering within the networking industry.
- Expertise in Linux deployment, scripting and configuration.
- Expertise in TCP/IP communications stacks and optimizations.
- Experience with ELK (Elasticsearch, Logstash, Kibana), Grafana, data streaming (e.g., Kafka), and software visualization.
- Experience in analyzing and debugging code defects in the production environment.
- Proficiency in version control systems such as Git.
- Ability to design comprehensive test scenarios for systems usability, execute tests, and prepare detailed reports on effectiveness and defects for production teams.
- Full-cycle systems engineering experience covering requirements capture, architecture, design, development, and system testing.
- Demonstrated ability to work independently and collaboratively within cross-functional teams.
- Proficient in installing, configuring, debugging, and interpreting performance analytics to monitor, aggregate, and visualize key performance indicators over time.
- Proven track record of directly interfacing with customers to address concerns and resolve issues effectively.
- Strong problem-solving skills, capable of driving resolutions autonomously without senior engineer support.
- Experience in configuring MySQL and PostgreSQL, including setup of replication, troubleshooting, and performance improvement.
- Proficiency in networking concepts such as network architecture, protocols (TCP/IP, UDP), routing and VLANs, essential for deploying new system servers effectively.
- Proficiency in Shell/Bash scripting on Linux systems.
- Proficient in utilizing, modifying, troubleshooting, and updating Python scripts and tools to refine code.
- Excellent written and verbal communication skills, including the ability to document processes, procedures, and system configurations effectively.
- Ability to handle stress and maintain quality: resilience to effectively manage stress and pressure, and a demonstrated ability to make informed decisions, particularly in high-pressure situations.
- This role requires 24/7 on-call availability to address service-affecting issues in production.
- Work is during Chicago business hours, aligning with local time for effective coordination and responsiveness to business operations and stakeholders in the region.
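Not from the posting: a minimal monitoring-script sketch that counts ERROR lines in a log file and indexes a summary document into Elasticsearch. It assumes the elasticsearch Python client (8.x API); the log path, cluster URL, and index name are placeholder assumptions.

```python
# Minimal monitoring-script sketch: count ERROR lines in a log file and index a
# summary document into Elasticsearch. Assumes the `elasticsearch` client (8.x);
# the log path, cluster URL, and index name are placeholder assumptions.
import re
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

LOG_PATH = "/var/log/app/service.log"        # placeholder
ERROR_PATTERN = re.compile(r"\bERROR\b")

def count_errors(path: str) -> int:
    with open(path, "r", errors="replace") as fh:
        return sum(1 for line in fh if ERROR_PATTERN.search(line))

def ship_metric(error_count: int) -> None:
    es = Elasticsearch("http://localhost:9200")   # placeholder cluster URL
    es.index(index="service-metrics", document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "log_path": LOG_PATH,
        "error_count": error_count,
    })

if __name__ == "__main__":
    ship_metric(count_errors(LOG_PATH))
```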

Posted 4 days ago

Apply

6.0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

Design, develop, test, and deploy scalable and resilient microservices using Java and Spring Boot. Collaborate with cross-functional teams to define, design, and ship new features. Work on the entire software development lifecycle, from concept and design to testing and deployment. Implement and maintain AWS cloud-based solutions, ensuring high performance, security, and scalability. Integrate microservices with Kafka for real-time data streaming and event-driven architecture. Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance. Keep up to date with industry trends and advancements, incorporating best practices into our development processes.

Requirements: Should be a Java full-stack developer. Bachelor's or Master's degree in Computer Science or a related field. 6+ years of hands-on experience in JAVA FULL STACK - ANGULAR + JAVA SPRING BOOT. Proficiency in Spring Boot and other Spring Framework components. Extensive experience in designing and developing RESTful APIs. Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS. Experience with Kafka for building event-driven architectures. Strong database skills, including SQL and NoSQL databases. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills.

Posted 4 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

P2-C1-TSTS

Development
- Design, develop, and maintain Java-based microservices.
- Write clean, efficient, and well-documented code.
- Collaborate with other developers and stakeholders to define requirements and solutions.
- Participate in code reviews and contribute to team knowledge sharing.

Microservices Architecture
- Understand and apply microservices principles and best practices.
- Design and implement RESTful APIs.
- Experience with containerization technologies (e.g., Docker) and orchestration (e.g., Kubernetes).
- Knowledge of distributed systems and service discovery.
- Experience with design patterns (e.g., circuit breaker pattern, proxy pattern).
- Deep understanding of distributed systems and service discovery.

Testing & Quality
- Develop and execute unit, integration, and performance tests.
- Ensure code quality and adhere to coding standards.
- Debug and resolve issues promptly.

Deployment & Monitoring
- Participate in the CI/CD pipeline.
- Deploy microservices to cloud platforms (e.g., AWS, Azure, GCP).
- Monitor application performance and identify areas for improvement.

Programming Languages
- Proficiency in Java (J2EE, Spring Boot).
- Familiarity with other relevant languages (e.g., JavaScript, Python).

Microservices
- Experience designing and developing microservices.
- Knowledge of RESTful APIs and other communication patterns.
- Experience with the Spring Framework.
- Experience with containerization (Docker) and orchestration (Kubernetes).

Databases
- Experience with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB).
- Familiarity with ORM frameworks (e.g., JPA, Hibernate).

Cloud Platforms
- Experience with at least one cloud platform (e.g., AWS, Azure, GCP).

Tools & Technologies
- Familiarity with CI/CD tools (e.g., Jenkins, Git).
- Knowledge of logging and monitoring tools (e.g., Splunk, Dynatrace).
- Experience with messaging brokers (e.g., Kafka, ActiveMQ).

Other
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Experience working in Agile/Scrum environments.

DevOps
- Experience with DevOps practices and automation.

Posted 4 days ago

Apply

16.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Principal Cloud Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions such as Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The Opportunity
We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in both delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
Proven experience driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations across BCM, WAM, and Insurance. Activities include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data.
Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. (16-20 years of experience)
Understand current and future-state enterprise architecture.
Contribute to various technical streams during project implementation.
Provide product- and design-level technical best practices.
Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions.
Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
Recommend design alternatives for data ingestion, processing and provisioning layers.
Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop and Spark.
Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.

Skills and Attributes for Success
Experience architecting highly scalable solutions on Azure, AWS and GCP.
Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components.
Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming.
Hands-on experience with major components such as cloud ETL tools, Spark and Databricks.
Experience working with NoSQL data stores - at least one of HBase, Cassandra, MongoDB.
Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems such as Cloudera and Databricks.
Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms.
Good knowledge of Apache Kafka and Apache Flume.
Experience in enterprise-grade solution implementations.
Experience in performance benchmarking of enterprise applications.
Experience in data security (in transit and at rest).
Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
Excellent communicator (written and verbal, formal and informal).
Ability to multi-task under pressure and work independently with minimal supervision.
Strong verbal and written communication skills.
A team player who enjoys working in a cooperative and collaborative team environment.
Adaptability to new technologies and standards.
Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment and support.
Responsibility for evaluating technical risks and mapping out mitigation strategies.
Working knowledge of at least one cloud platform: AWS, Azure or GCP.
Excellent business communication, consulting and quality process skills.
Excellent consulting skills.
Excellence in leading solution architecture, design, build and execution for leading clients in the Banking, Wealth & Asset Management, or Insurance domain.
Minimum 7 years of hands-on experience in one or more of the above areas.
Minimum 10 years of industry experience.

Ideally, you’ll also have
Strong project management skills.
Client management skills.
Solutioning skills.
The innate quality to become the go-to person for any marketing, pre-sales and solution accelerator work within the practice.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working at EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
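For candidates gauging what the real-time ingestion responsibility above looks like in practice, below is a minimal sketch of consuming a Kafka topic with Spark Structured Streaming in Java. It is illustrative only, not EY's implementation: the broker address (broker:9092), topic name (events) and console sink are placeholder assumptions, and the spark-sql-kafka connector is assumed to be on the classpath.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaIngestSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-ingest-sketch")
                .getOrCreate();

        // Subscribe to a Kafka topic as an unbounded streaming source.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
                .option("subscribe", "events")                    // placeholder topic
                .load();

        // Kafka delivers key/value as binary; cast to strings before further transformation.
        Dataset<Row> decoded = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        // Console sink for illustration; a real pipeline would write to a lake table or another topic.
        StreamingQuery query = decoded.writeStream()
                .format("console")
                .outputMode("append")
                .start();

        query.awaitTermination();
    }
}
```

The same readStream/writeStream structure extends to checkpointed sinks and per-batch transformations, which is where the Kafka partitioning and fault-tolerance considerations listed above come into play.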

Posted 5 days ago

Apply

0 years

0 Lacs

Mysore, Karnataka, India

On-site

Linkedin logo

Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities
As a Software Developer, you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing and support. You will create software that enables your clients' hybrid-cloud and AI journeys. You'll have the opportunity to work with the latest technologies, ensuring the applications delivered are high performing, highly available, responsive and maintainable.

Your Primary Responsibilities Include
Analytical problem-solving and solution enhancement: analyze, validate and propose improvements for existing failures, with the support of the architect and technical leader.
Comprehensive engagement across process phases: involvement in every step of the process, from design and development to testing, releasing changes and troubleshooting where necessary, providing great customer service.
Strategic stakeholder engagement and innovative coding solutions: drive key discussions with your stakeholders and analyze the current landscape for opportunities to operate and code creative solutions.

Preferred Education
Master's Degree

Required Technical and Professional Expertise
Work with Hiring Manager to ID up to 5 bullets max

Preferred Technical and Professional Experience
Experience in Core Java.
Hands-on experience with Spring and Kafka.
Experience working with any relational database.
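As a rough illustration of the Spring and Kafka combination this role asks for, here is a minimal Spring Boot sketch, assuming the spring-kafka dependency is on the classpath and broker settings are supplied through standard spring.kafka.* properties. The topic names and group id are hypothetical placeholders, not part of the posting.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class SpringKafkaSketchApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringKafkaSketchApplication.class, args);
    }
}

@Component
class OrderEventsRelay {
    private final KafkaTemplate<String, String> template;

    OrderEventsRelay(KafkaTemplate<String, String> template) {
        this.template = template;
    }

    // Consume from one (placeholder) topic and republish a derived event to another.
    @KafkaListener(topics = "orders", groupId = "order-relay")
    public void onOrder(String payload) {
        template.send("order-audit", payload);
    }
}
```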

Posted 5 days ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Job Title: Tech Lead – Pre-Sales
Location: Delhi NCR, India (Hybrid)
Experience: 8+ years
Employment Type: Full-time
Industry: Technology Consulting & Data Analytics

About EXL
EXL is a global analytics and digital solutions company that partners with clients to improve business outcomes and unlock growth. We combine deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions.

Role Overview
We are seeking a dynamic Data Engineer – Tech Lead – Pre-Sales to spearhead our data engineering and GenAI initiatives and play a pivotal role in pre-sales activities. This hybrid role demands a blend of deep technical expertise in data engineering and the acumen to engage with clients, understand their needs, and craft compelling solutions that drive business growth. Additionally, the candidate will be instrumental in establishing thought leadership through newsletters, technical campaigns and other marketing initiatives.

Key Responsibilities

Technical Expertise
Maintain a strong understanding of data engineering concepts, tools and best practices.
Stay updated with emerging technologies in cloud platforms (AWS, Azure, GCP) and big data ecosystems.
Provide technical insights to inform marketing content and client discussions.
Leverage AI/ML and GenAI technologies to enhance data solutions and drive innovation.

Pre-Sales & Client Engagement
Partner with sales teams to identify client needs and propose tailored data and AI solutions.
Develop and deliver technical presentations, demonstrations and proofs of concept to prospective clients.
Assist in creating proposals, RFP responses and other sales materials that align with client requirements.
Engage in client meetings to understand requirements, address concerns and build strong relationships.

Marketing & Thought Leadership
Collaborate with the marketing team to develop go-to-market strategies for data and AI solutions.
Author technical blogs, whitepapers and case studies to establish thought leadership in the data engineering and AI domains.
Design and lead technical campaigns, webinars and workshops targeting potential clients and industry stakeholders.
Develop newsletters and other marketing collateral that translate complex technical concepts into accessible content for diverse audiences.
Engage in market research to identify trends, client needs and opportunities for EXL's data and AI solutions.

Qualifications
Education: Bachelor’s or Master’s degree in Engineering or a related field.
Experience: Minimum of 8 years in data engineering, with at least 2 years in a role involving marketing or pre-sales activities.

Technical Skills
Proficiency in at least one cloud platform: AWS, Azure and/or GCP (Azure preferred).
Strong experience with big data technologies such as Hadoop, Spark and Kafka.
Expertise in SQL and NoSQL databases.
Familiarity with ETL tools and data integration techniques.
Programming skills in languages such as Python.
Experience with LLMs and GenAI.

Marketing & Communication Skills
Proven experience in creating technical marketing content (blogs, whitepapers, case studies).
Ability to develop and implement comprehensive marketing strategies to increase both brand awareness and targeted user acquisition.
Ability to translate complex technical concepts into accessible content for diverse audiences.
Experience in organizing and leading webinars, workshops or technical campaigns.
Ability to communicate complex ideas, anticipate potential objections and persuade others, often at senior levels, to adopt a different point of view.
Experience leading projects with notable risk and complexity.
Experience in people management is encouraged.
Demonstrated excellent oral, presentation, written and interpersonal communication skills.
Proven teamwork, thought leadership and stakeholder collaboration with limited guidance.
Demonstrated excellent time management and organizational skills.
Ability to work to deadlines under pressure while providing meticulous attention to detail.
In-depth conceptual and practical knowledge of product management and go-to-market strategies.
Proficiency in the Microsoft Office Suite (Excel, PowerPoint, Word, etc.).

Preferred Qualifications
Experience in the consulting industry or working with cross-functional teams.
Familiarity with data governance, security and compliance standards.
Certifications in cloud platforms or data engineering tools.
Experience collaborating with marketing teams on technical content creation and campaigns.
Experience with AI/ML frameworks.
Knowledge of AI/ML and GenAI applications in data engineering.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.

Software Development Engineer III
A high-performing individual contributor who acts as a mentor to more junior engineers, applies new engineering principles to improve existing systems, and is responsible for leading complex, well-defined projects.
You will join Order Management Service (OMS), the core system at Expedia that supports both pre-booking and post-booking processes. It plays a critical role in multiple ongoing business and technology initiatives aimed at expanding our offerings. OMS leverages a diverse technology stack, including Elasticsearch, Kotlin, AWS Cloud, Spring Boot, Kafka, Apache, and more. With system availability of 99.99% for Tier 0 and Tier 1 services, OMS is designed for high reliability and performance.

What You'll Do
Proactively teams up with peers across the organization to build an understanding of cross-dependencies and shared problem-solving. Participates in a community of practice to share and gain knowledge.
Continually seeks new technical skills in an engineering area. Shares new skills and knowledge with the team to increase effectiveness.
Demonstrates knowledge of advanced and relevant technology. Is comfortable working with several forms of technology. Understands the relationship between applications, databases, and technology platforms.
Develops and tests complex or non-routine software applications and related programs and procedures to ensure they meet design requirements.
Effectively applies knowledge of software design principles, data structures and/or design patterns, and computer science fundamentals to write code that is clean, maintainable, optimized, and modular, with good naming conventions.
Effectively applies knowledge of databases and database design principles to solve data requirements.
Effectively uses an understanding of software frameworks and how to leverage them to write simpler code.
Leads and clarifies code evolution in code reviews.
Brings together stakeholders with varied perspectives to develop solutions to issues and contributes their own suggestions.
Thinks holistically to identify opportunities around policies/processes to increase efficiency across organizational boundaries.
Assists with a whole-systems approach to analyzing issues by ensuring all components (structure, people, process, and technology) are identified and accounted for.
Identifies areas of inefficiency in code or systems operation and offers suggestions for improvements.
Compiles and reports on major operational or technical initiatives (like RCAs) to larger groups, whether via written or oral means.

Who You Are
5+ years of experience for Bachelor's, or 3+ years for Master's.
Developed software in at least 3 different languages.
Maintained/ran at least 4 software projects/products in production environments (bug fixing, troubleshooting, monitoring, etc.).
Has strength in a couple of languages and/or one language with multiple technology implementations. Identifies strengths and weaknesses among languages for particular use cases.
Creates APIs to be consumed across the business unit. Selects among the available technologies to implement and solve the need.
Understands how projects/teams interact with other teams. Understands and designs moderately complex systems.
Tests and monitors code at the project level. Understands testing and monitoring tools. Debugs applications. Tests, debugs, and fixes issues within established SLAs. Designs easily testable and observable software.
Understands how team goals fit a business need. Identifies business problems at the project level and provides solutions.

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.
We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for our award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.
Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50
Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs.
Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 5 days ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.

Role Summary
An established performer who consistently applies software engineering principles to business contexts, leads small, well-defined projects and continues as a supporting player on complex projects, and actively identifies inefficiencies in existing systems.

Who You Are

Experience
2+ years of experience for Bachelor's, or 0-2 years for Master's education.
Developed software in a team environment of at least 5 engineers (agile, version control, etc.).
Built and maintained a software project/product in production environments on public/hybrid cloud infrastructure.

Functional/Technical Skills
Has strength in a language and moderate familiarity with other applicable languages.
Is familiar with a couple of data stores or API access patterns and integration.
Has familiarity with associated technologies within their specialization as part of the wider ecosystem.
Understands how projects fit together within their team. Understands moderately complex systems.
Tests and monitors their own code. Understands testing and monitoring tools. Debugs applications.
Understands how team goals fit a business need.
Skills: Java, Kotlin, OOP. Good to have: an understanding of ES, Kafka/EQS, and databases.

What You'll Do
Collaborates with team members to co-develop and solve problems. Proactively reaches out to meet peers across the environment and collaborates to solve problems.
Takes advantage of opportunities to build new technical expertise in a specific engineering area. Seeks knowledge from subject matter experts when needed.
Understands the importance of system and technology integration and the basic features and facilities involved in the integration process.
Develops and tests standard software applications and related programs and procedures to ensure they meet design requirements.
Applies software design principles, data structures and/or design patterns, and computer science fundamentals to write code that is clean, maintainable, optimized, and modular, with good naming conventions.
Applies knowledge of database design to solve data requirements.
Helps coordinate stakeholder input and collaboration efforts when developing solutions to issues.
Thinks broadly and understands how, why, and when policies/processes are standardized and when they differ across the organization.
Completes tasks and/or provides data to support implementation of holistic solutions that forge linkages between structure, people, process, and technology.
Applies formal training methods to current workload.
Feels comfortable challenging authority/the status quo. Reports clearly on current work status. Asks challenging questions when empowered to do so.

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.
We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for our award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.
Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50
Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs.
Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 5 days ago

Apply

2.0 - 6.0 years

4 - 9 Lacs

Hyderabad

Work from Office

Naukri logo

Design and develop data flows
Integration with data sources
Data transformation
Error handling and monitoring
Performance optimization
Collaboration
Documentation
Security and compliance

Required Candidate Profile
Apache NiFi and data integration tools
ETL concepts
Data formats such as JSON, XML, and Avro
Programming languages such as Java, Python, or Groovy
Data storage solutions such as Hadoop and Kafka

Posted 5 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

As a Lead Software Engineer – Performance Engineering, you will drive the strategy, design, and execution of performance engineering initiatives across highly distributed systems. You will lead technical efforts to ensure reliability, scalability, and responsiveness of business-critical applications. This role requires deep technical expertise, hands-on performance testing experience, and the ability to mentor engineers while collaborating cross-functionally with architecture, SRE, and development teams.

Responsibilities:
Define, implement, and enforce SLAs, SLOs, and performance benchmarks for large-scale systems.
Lead performance testing initiatives including load, stress, soak, chaos, and scalability testing.
Design and build performance testing frameworks integrated into CI/CD pipelines.
Analyze application, infrastructure, and database metrics to identify bottlenecks and recommend optimizations.
Collaborate with cross-functional teams to influence system architecture and improve end-to-end performance.
Guide the implementation of observability strategies using monitoring and APM tools.
Optimize cloud infrastructure (e.g., autoscaling, caching, network tuning) for cost-efficiency and speed.
Tune databases and messaging systems (e.g., PostgreSQL, Kafka, Redis) for high throughput and low latency.
Mentor engineers and foster a performance-first culture across teams.
Lead incident response and postmortem processes related to performance issues.
Drive continuous improvement initiatives using data-driven insights and operational feedback.

Required Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
8+ years of experience in software/performance engineering, with 2+ years in a technical leadership role.
Expertise in performance testing tools such as JMeter, k6, Gatling, or Locust.
Strong knowledge of distributed systems, cloud-native architecture, and microservices.
Proficiency in scripting and automation using Python, Go, or Shell.
Experience with observability and APM tools (e.g., Datadog, Prometheus, New Relic, AppDynamics).
Deep understanding of SQL performance, caching strategies, and tuning for systems like PostgreSQL and Redis.
Familiarity with CI/CD pipelines, container orchestration, and IaC tools (e.g., Kubernetes, Terraform).
Strong communication skills and experience mentoring and leading technical teams.
Ability to work cross-functionally and make informed decisions in high-scale, production environments.

Posted 5 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

At Tarana, you will help build a cutting-edge cloud product -- a management system for wireless networks, scaling to millions of devices -- using modern cloud-native architecture and open-source technologies. You will be responsible for designing and implementing distributed software in a microservices architecture. This could include everything from requirements gathering (working with Product Management and customers) to high-level design to implementation, integrations, operations, troubleshooting, performance tuning and scaling. You will work as a key member of an R&D team that owns one or more services, end-to-end. There will be PoCs, customer pilots, and production releases, all in an agile engineering environment. Expect to be challenged and stretch your skills on a daily basis. Expect to meet or beat exacting standards of quality and performance. We will provide the right mentoring to make sure that you can succeed. The job is based in Pune, and this job profile will require in-person presence in the office.

Required Skills & Experience:
Bachelor’s degree (or higher) in Computer Science or a closely related field, from a reputed university (Tier 1/Tier 2)
At least 10+ years of experience in backend software development, in product companies or tech startups
Experience with building SaaS/IoT product offerings will be a plus
Software development in Java and its associated ecosystem (e.g., Spring Boot, Hibernate, etc.)
Microservices and RESTful APIs: implementation and consumption
Conceptual knowledge of distributed systems -- clustering, asynchronous messaging, streaming, scalability & performance, data consistency, high availability, etc. -- would be a big plus
Good understanding of databases (relational, NoSQL) and caching; experience with any time series database will be a plus
Experience with distributed messaging systems like Kafka/Confluent, Kinesis, or Google Pub/Sub would be a plus
Experience with cloud-native platforms like Kubernetes will be a big plus
Working knowledge of network protocols (TCP/IP, HTTP), standard network architectures, and RPC mechanisms (e.g., gRPC)

Since our founding in 2009, we’ve been on a mission to accelerate the pace of bringing fast and affordable internet access -- and all the benefits it provides -- to the 90% of the world’s households who can’t get it. Through a decade of R&D and more than $400M of investment, we’ve created an entirely unique next-generation fixed wireless access technology, powering our first commercial platform, Gigabit 1 (G1). It delivers a game-changing advance in broadband economics in both mainstream and underserved markets, using either licensed or unlicensed spectrum. G1 started production in mid-2021 and has now been installed by over 160 service providers globally. We’re headquartered in Milpitas, California, with additional research and development in Pune, India. G1 has been developed by an incredibly talented and pioneering core technical team. We are looking for more world-class problem solvers who can carry on our tradition of customer obsession and ground-breaking innovation. We’re well funded, growing incredibly quickly, maintaining a superb results-focused culture while we’re at it, and all grooving on the positive difference we are making for people all over the planet. If you want to help make a real difference in this world, apply now!

Posted 5 days ago

Apply

Exploring Kafka Jobs in India

Kafka, a popular distributed streaming platform, has gained significant traction in the tech industry in recent years. Job opportunities for Kafka professionals in India have been on the rise, with many companies looking to leverage Kafka for real-time data processing and analytics. If you are a job seeker interested in Kafka roles, here is a comprehensive guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Gurgaon

These cities are known for their thriving tech industries and have a high demand for Kafka professionals.

Average Salary Range

The average salary range for Kafka professionals in India varies based on experience levels. Entry-level positions may start at around INR 6-8 lakhs per annum, while experienced professionals can earn between INR 12-20 lakhs per annum.

Career Path

Career progression in Kafka typically follows a path from Junior Developer to Senior Developer, and then to a Tech Lead role. As you gain more experience and expertise in Kafka, you may also explore roles such as Kafka Architect or Kafka Consultant.

Related Skills

In addition to Kafka expertise, employers often look for professionals with skills in:

  • Apache Spark
  • Apache Flink
  • Hadoop
  • Java/Scala programming
  • Data engineering and data architecture

Interview Questions

  • What is Apache Kafka and how does it differ from other messaging systems? (basic)
  • Explain the role of Zookeeper in Apache Kafka. (medium)
  • How does Kafka guarantee fault tolerance? (medium)
  • What are the key components of a Kafka cluster? (basic)
  • Describe the process of message publishing and consuming in Kafka. (medium)
  • How can you achieve exactly-once message processing in Kafka? (advanced)
  • What is the role of Kafka Connect in Kafka ecosystem? (medium)
  • Explain the concept of partitions in Kafka. (basic)
  • How does Kafka handle consumer offsets? (medium)
  • What is the role of the Kafka Producer API? (basic; see the sketch after this list)
  • How does Kafka ensure high availability and durability of data? (medium)
  • Explain the concept of consumer groups in Kafka. (basic)
  • How can you monitor Kafka performance and throughput? (medium)
  • What is the purpose of Kafka Streams API? (medium)
  • Describe the use cases where Kafka is not a suitable solution. (advanced)
  • How does Kafka handle data retention and cleanup policies? (medium)
  • Explain the Kafka message delivery semantics. (medium)
  • What are the different security features available in Kafka? (medium)
  • How can you optimize Kafka for high throughput and low latency? (advanced)
  • Describe the role of a Kafka Broker in a Kafka cluster. (basic)
  • How does Kafka handle data replication across brokers? (medium)
  • Explain the significance of serialization and deserialization in Kafka. (basic)
  • What are the common challenges faced while working with Kafka? (medium)
  • How can you scale Kafka to handle increased data loads? (advanced)
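To make a few of the basic questions above concrete (the Producer API, partitions, consumer groups, and offset handling), here is a minimal sketch using the standard Java Kafka client. The broker address, topic name, and group id are placeholder values chosen purely for illustration.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class KafkaBasicsSketch {
    public static void main(String[] args) {
        // Producer: publishes a keyed record; the key determines which partition it lands on.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("acks", "all"); // wait for all in-sync replicas: durability over latency

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("demo-topic", "order-42", "created"));
        }

        // Consumer: joins a consumer group; partitions are divided among the group's members,
        // and committed offsets record how far the group has read in each partition.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("enable.auto.commit", "false"); // commit offsets explicitly after processing

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                        record.partition(), record.offset(), record.key(), record.value());
            }
            consumer.commitSync(); // at-least-once delivery: commit only after records are processed
        }
    }
}
```

Running a second consumer with the same group.id splits the topic's partitions between the two instances, while a different group.id receives its own full copy of the stream; this is exactly the behaviour the consumer-group and offset questions above are probing.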

Closing Remark

As you explore Kafka job opportunities in India, remember to showcase your expertise in Kafka and related skills during interviews. Prepare thoroughly, demonstrate your knowledge confidently, and stay updated with the latest trends in Kafka to excel in your career as a Kafka professional. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies