Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Application Developer - Data Engineering
Experience: 4-6 Years
Notice Period: Immediate to 20 Days

Job Summary: We are looking for a highly skilled Data Engineering Application Developer to join our dynamic team. You will be responsible for the design, development, and configuration of data-driven applications that align with key business processes. Your role will also include refining data workflows, optimizing performance, and supporting business goals through scalable and reliable data solutions.

Roles & Responsibilities:
- Independently develop and maintain data pipelines and ETL processes.
- Become a Subject Matter Expert (SME) in Data Engineering tools and practices.
- Collaborate with cross-functional teams to gather requirements and provide data-driven solutions.
- Actively participate in team discussions and contribute to problem-solving efforts.
- Create and maintain comprehensive technical documentation, including application specifications and user guides.
- Stay updated with industry best practices and continuously improve application and data processing performance.
Professional & Technical Skills:
Must-Have Skills:
- Proficiency in Data Engineering, PySpark, and Python
- Strong knowledge of ETL processes and data modeling
- Experience working with cloud platforms like AWS or Azure
- Hands-on expertise with SQL or NoSQL databases
- Familiarity with other programming languages such as Java
Good-to-Have Skills:
- Knowledge of Big Data tools and frameworks (e.g., Hadoop, Hive, Kafka)
- Experience with CI/CD tools and DevOps practices
- Exposure to containerization tools like Docker or Kubernetes

#DataEngineering #PySpark #PythonDeveloper #ETLDeveloper #BigDataJobs #DataEngineer #BangaloreJobs #PANIndiaJobs #AWS #Azure #SQL #NoSQL #CloudDeveloper #ImmediateJoiners #DataPipeline #Java #Kubernetes #SoftwareJobs #ITJobs #NowHiring #HiringAlert #ApplicationDeveloper #DataJobs #ITCareers #JoinOurTeam #TechJobsIndia #JobOpening #FullTimeJobs
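Since the must-have list centers on ETL, here is a minimal, illustrative sketch of the extract-transform-load pattern in plain Python. The field names (`user_id`, `amount`) and cleaning rules are hypothetical; a production job would typically use PySpark or a similar engine rather than in-memory lists.

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize fields and drop rows missing a required key."""
    out = []
    for r in rows:
        if not r.get("user_id"):
            continue  # skip malformed records instead of failing the job
        out.append({"user_id": r["user_id"].strip(),
                    "amount": round(float(r["amount"]), 2)})
    return out

def load(rows, sink):
    """Load: append cleaned rows to a target store (a list stands in here)."""
    sink.extend(rows)
    return len(rows)

raw = "user_id,amount\nu1,10.456\n,99\nu2,3.1\n"
sink = []
loaded = load(transform(extract(raw)), sink)
```

The same three-stage shape carries over to PySpark, where `extract` becomes a `spark.read`, `transform` a chain of DataFrame operations, and `load` a `write` to the warehouse.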
Posted 3 days ago
4.0 - 5.11 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position Title: Sr. Java Developer (Web)
Openings:
- 4 to 5.11 Years - 1 Developer
- 6 to 7.11 Years - 1 Developer
The position is with a BFSI-domain client based out of Kanjurmarg (Mumbai). The client is a market leader in its domain. Selected candidates will be working on cutting-edge technologies, as the client is looking for dynamic, hardworking, committed candidates.
Qualification: B.E/B.Tech/M.Tech/MCA.
Key Responsibilities:
- Developing, releasing, and supporting Java-based multi-tier robust web applications and standalone systems.
- Deliver across the entire app life cycle: design, build, deploy, test, release, and support.
- Working directly with developers and product managers to conceptualize, build, test, and realise products.
- Work on bug fixing and improving application performance in coordination with the QA team.
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
- Optimize performance for the apps and keep up to date with the latest industry trends in emerging technologies.
Required Skills:
- Experience in developing applications using Java/J2EE programming skills, with a sound understanding of Java 8-17.
- Strong proficiency in the back-end language (Java), Java frameworks (Spring Boot, Spring MVC), JavaScript frameworks (Angular, AngularJS), and Kafka.
- Strong JS skills in jQuery, HTML, and CSS; strong understanding of and experience with microservices.
- Experience working with RDBMS concepts, SQL syntax, and complex query processing and optimization (e.g. PostgreSQL, Oracle).
- Exposure to handling and configuring web servers (e.g. Apache) and UI/UX design.
- Strong understanding of object-oriented programming (OOP) concepts and design patterns.
- Experience in web services and a clear understanding of RESTful APIs to connect to back-end services.
- Excellent problem-solving skills, with the ability to debug and troubleshoot code issues.
- Strong communication and teamwork skills, with the ability to work collaboratively with cross-functional teams.
Selection Procedure:
- Face-to-face round of interview at the Greysoft office.
- Virtual round of interview by the client.
- Machine test (client location).
Joining Period: Immediate to 15 days.
Interested candidates can email their updated resume to recruiter@greysoft.in
This job is provided by Shine.com
Posted 3 days ago
4.0 - 5.11 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position Title: Sr. Java Developer (Core)
Openings:
- 4 to 5.11 Years - 1 Developer
- 6 to 7.11 Years - 1 Developer
The position is with a BFSI-domain client based out of Kanjurmarg (Mumbai). The client is a market leader in its domain. Selected candidates will be working on cutting-edge technologies, as the client is looking for dynamic, hardworking, committed candidates.
Qualification: B.E/B.Tech/M.Tech/MCA.
Key Responsibilities:
- Conceptualizing, developing, releasing, and supporting Java-based multi-tier robust web applications and standalone systems.
- Deliver across the entire app life cycle: design, build, deploy, test, release, and support.
- Optimize performance for the systems; continuously discover, evaluate, and implement emerging technologies to maximize development efficiency.
- Working directly with developers and product managers to conceptualize, build, test, and realise products.
- Work on bug fixing and improving application performance in coordination with the QA team.
Required Skills:
- Strong knowledge of Java 8-17, including the Collections framework and data structures, multithreading and concurrency management, memory management, Kafka, request queuing, NIO, IO, TCP/IP, and the file system.
- Experience in developing applications using Java/J2EE programming skills, preferably with real-time response systems.
- Strong proficiency in the back-end language (Java) and Java frameworks (Spring Boot, Spring MVC).
- Strong understanding of and experience with microservices.
- Experience working with RDBMS concepts, SQL syntax, and complex query processing and optimization (e.g. PostgreSQL, Oracle), and in-memory databases such as Redis and Memcache.
- Exposure to handling and configuring web servers (e.g. Apache) and UI/UX design.
- Strong understanding of object-oriented programming (OOP) concepts and design patterns.
- Excellent problem-solving skills, with the ability to debug and troubleshoot code issues.
- Strong communication and teamwork skills, with the ability to work collaboratively with cross-functional teams.
Selection Procedure:
- Face-to-face round of interview at the Greysoft office.
- Virtual round of interview by the client.
- Machine test (client location).
Joining Period: Immediate to 15 days.
Interested candidates can email their updated resume to recruiter@greysoft.in
This job is provided by Shine.com
Posted 3 days ago
2.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are looking for an experienced Data Engineer with experience in building large-scale data pipelines and data lake ecosystems. Our daily work is about solving interesting and exciting problems against high engineering standards. Even though you will be part of the backend team, you will be working with cross-functional teams across the org. This role demands good hands-on experience with different programming languages, especially Python, and knowledge of technologies like Kafka, AWS Glue, CloudFormation, ECS, etc. You will spend most of your time facilitating seamless streaming, tracking, and collation of huge data sets. This is a back-end role, but not limited to it. You will work closely with producers and consumers of the data and build optimal solutions for the organization. We will appreciate a person with lots of patience and data understanding. Also, we believe in extreme ownership!

Responsibilities:
- Design and build systems to efficiently move data across multiple systems and make it available for various teams like Data Science, Data Analytics, and Product.
- Design, construct, test, and maintain data management systems.
- Understand the data and business metrics required by the product and architect the systems to make that data available in a usable/queryable manner.
- Ensure that all systems meet the business/company requirements as well as industry best practices.
- Keep abreast of new technologies in our domain.
- Recommend different ways to constantly improve data reliability and quality.

Requirements:
- Bachelor's/Master's, preferably in Computer Science or a related technical field.
- 2-5 years of relevant experience.
- Deep knowledge of and working experience with the Kafka ecosystem.
- Good programming experience, preferably in Python, Java, or Go, and a willingness to learn more.
- Experience in working with large-scale data platforms.
- Strong knowledge of microservices, data warehouse, and data lake systems in the cloud, especially AWS Redshift, S3, and Glue.
- Strong hands-on experience in writing complex and efficient ETL jobs.
- Experience with version management systems (preferably Git).
- Strong analytical thinking and communication.
- Passion for finding and sharing best practices and driving discipline for superior data quality and integrity.
- Intellectual curiosity to find new and unusual ways to solve data management issues.
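The posting's emphasis on data reliability and quality can be made concrete with a small sketch of a validation gate that quarantines bad records instead of failing the whole pipeline. The field names and rules below are illustrative, not from the posting; a real pipeline would route the quarantine to a dead-letter topic or table.

```python
from datetime import datetime

REQUIRED_FIELDS = {"event_id", "ts", "value"}

def validate(record):
    """Return (ok, reason) for one record; rules here are hypothetical."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        return False, "missing fields: " + ", ".join(sorted(missing))
    try:
        datetime.fromisoformat(record["ts"])  # e.g. "2024-01-31T10:00:00"
    except (TypeError, ValueError):
        return False, "unparseable timestamp"
    if not isinstance(record["value"], (int, float)):
        return False, "non-numeric value"
    return True, ""

def quality_gate(records):
    """Pass good records onward; quarantine the rest with a reason."""
    good, quarantined = [], []
    for r in records:
        ok, reason = validate(r)
        if ok:
            good.append(r)
        else:
            quarantined.append({"record": r, "reason": reason})
    return good, quarantined
```

Keeping the reject reason alongside each quarantined record is what makes the "improve data reliability and quality" loop possible: rejects can be counted, inspected, and fed back to the producing team.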
Posted 3 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking a highly experienced AWS Data Solution Architect to lead the design and implementation of scalable, secure, and high-performance data architectures on the AWS cloud. The ideal candidate will have a deep understanding of cloud-based data platforms, analytics, and best practices for optimizing data pipelines and storage. You will work closely with data engineers, business stakeholders, and cloud architects to deliver robust data solutions.

Key Responsibilities:
1. Architecture Design and Planning: Design scalable and resilient data architectures on AWS that include data lakes, data warehouses, and real-time processing. Architect end-to-end data solutions leveraging AWS services such as S3, Redshift, RDS, DynamoDB, Glue, and Lake Formation. Develop multi-layered security frameworks for data protection and governance.
2. Data Pipeline Development: Build and optimize ETL/ELT pipelines using AWS Glue, Data Pipeline, and Lambda. Integrate data from various sources like RDBMS, NoSQL, APIs, and streaming platforms. Ensure high availability and real-time processing capabilities for mission-critical applications.
3. Data Warehousing and Analytics: Design and optimize data warehouses using Amazon Redshift or Snowflake. Implement data modeling, partitioning, and indexing for optimal performance. Create analytical models to drive business insights and data-driven decision-making.
4. Real-time Data Processing: Implement real-time data processing using AWS Kinesis, Kafka, or MSK. Architect solutions for event-driven architectures with Lambda and EventBridge.
5. Security and Compliance: Implement best practices for data security, encryption, and access control using IAM, KMS, and Lake Formation. Ensure compliance with regulatory standards like GDPR, HIPAA, and CCPA.
6. Monitoring and Optimization: Monitor performance, optimize costs, and enhance the reliability of data pipelines and storage. Set up observability with AWS CloudWatch, X-Ray, and CloudTrail.
Troubleshoot issues and ensure business continuity with automated recovery mechanisms.
7. Documentation and Best Practices: Create detailed architecture diagrams, data flow mappings, and documentation for reference. Establish best practices for data governance, architecture design, and deployment.
8. Collaboration and Leadership: Work closely with data engineers, application developers, and DevOps teams to ensure seamless integration. Act as a technical advisor to business stakeholders for cloud-based data solutions.

Regulatory Compliance Reporting Experience:
The architect should be able to resolve complex challenges due to the strict regulatory environment in India and the need to balance compliance with operational efficiency. Key complexities include:
a) Building data segregation and access control capability: This requires an in-depth understanding of data privacy laws, Amazon's global data architecture, and the ability to design systems that can segregate and control access to sensitive payment data without compromising functionality.
b) Integrating diverse data sources into a Secure Redshift Cluster (SRC), which involves working with multiple teams and systems, each with its own data structure and transfer protocols.
c) Instrumenting additional UPI data elements, which requires collaborating with UPI tech teams and a deep understanding of UPI transaction flows to ensure accurate and compliant data capture.
d) Automating Law Enforcement Agency (LEA) and Financial Intelligence Unit (FIU) reporting: This involves creating secure, automated pipelines for highly sensitive data, ensuring accuracy and timeliness while meeting strict regulatory requirements.
The Architect will be extending from India-specific solutions to serving worldwide markets.
Complexities include:
a) Designing a unified data storage and compute architecture, which requires harmonizing diverse tech stacks and data logging practices across multiple countries while considering data sovereignty laws and the cost implications of cross-border data transfers.
b) Setting up comprehensive datamarts covering metrics and dimensions, which involves standardizing metric definitions across markets, ensuring data consistency, and designing for scalability to accommodate future growth.
c) Enabling customer segmentation across power-up programs, which requires integrating data from diverse programs while maintaining data integrity and respecting country-specific data usage regulations.
d) Managing time zone challenges: Synchronizing data across multiple time zones requires innovative solutions to ensure timely data availability without compromising completeness or accuracy.
e) Navigating regulatory complexities: Designing systems that comply with varying and evolving data regulations across multiple countries while maintaining operational efficiency and flexibility for future changes.
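Several of the points above (warehouse partitioning, time zone synchronization) come down to how data is laid out in the lake. A hedged sketch of Hive-style date partitioning with UTC normalization follows; the dataset prefix and key layout are illustrative, not any real system's scheme.

```python
from datetime import timezone

def partitioned_key(dataset, event_time, filename):
    """Build a Hive-style partitioned object key (year=/month=/day= prefixes).

    Normalizing to UTC first means producers in different time zones land
    their events in the same, unambiguous date partition.
    """
    t = event_time.astimezone(timezone.utc)
    return (f"{dataset}/year={t.year:04d}/month={t.month:02d}/"
            f"day={t.day:02d}/{filename}")
```

Query engines such as Athena, Hive, and Spark can prune on these `key=value` prefixes, so a query filtered to one day scans only that day's objects.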
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Description: Your team
You'll be joining the Security Services team, which looks after a global technical control and operates a set of in-house developed tools. As an IT quality & test automation engineer you'll play an important role in ensuring the development methodology is followed, and lead technical design discussions with the architects. Our culture centers around partnership with our businesses, transparency, accountability and empowerment, and passion for the future.

Responsibilities: Your role
Do you know how to provide applications that exceed expectations? Are you a crafty problem solver? We are seeking a talented and experienced Full Stack Developer with expertise in Node.js, React, PostgreSQL, DevOps, and Azure App Service to join our dynamic development team. The ideal candidate will be responsible for designing, developing, and maintaining web applications and widgets that deliver high performance and responsiveness. You will work on both the front-end and back-end components of our applications, ensuring seamless integration and functionality. Additionally, you will manage our deployment processes and maintain our applications on Azure App Service. We're looking for someone who can:
• Design and implement microservices architectures.
• Develop and maintain server-side logic using Node.js.
• Design and implement front-end components using React.
• Build and optimize distributed event streaming in Kafka.
• Build and optimize database queries and structures in PostgreSQL.
• Collaborate with cross-functional teams to define, design, and ship new features.
• Ensure the performance, quality, and responsiveness of applications.
• Identify and correct bottlenecks and fix bugs.
• Help maintain code quality, organization, and automation.
• Implement and manage CI/CD pipelines to automate deployment processes.
• Deploy, monitor, and maintain applications on Azure App Service.
• Manage and configure cloud infrastructure, ensuring scalability and reliability.
• Develop scripts and automation tools to improve operational efficiency.
• Debug and design scripts (Bash/PowerShell) related to the development and deployment of applications.
• Manage code repositories via GitLab and collaborate with other engineers/managers on version control.

Mandatory Skills:
• Professional Experience: 5+ years of experience in full stack development.
• Front-end: Proficient in React.js and state management libraries (e.g., Redux). Strong understanding of HTML5, CSS3, and JavaScript (ES6+). Experience with front-end build tools and code versioning tools (GitLab, VS Code). Strong understanding of web-based security protocols and how to remediate client-side vulnerabilities.
• Back-end: Strong proficiency with Node.js and Express.js. Experience with RESTful API design and implementation. Intermediate to advanced knowledge of Node.js-based application infrastructure on both Windows and Linux OS (proficiency in Bash and PowerShell). In-depth understanding of authentication flows, primarily OAuth 2.0, especially based around Azure Active Directory. Strong understanding of server-based security protocols and how to remediate server-side vulnerabilities. Some exposure to/experience with middleware.
• Database: Proficient in PostgreSQL, including schema design, query optimization, and performance tuning.
• DevOps: Experience with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI). Proficient in containerization and orchestration tools (e.g., Docker, Kubernetes, OpenShift).
• Cloud Services: Hands-on experience with Azure App Service. Familiarity with other Azure services and tools.
• Testing: Experience with automated testing frameworks (Squash).

Nice-to-Have Skills:
Soft Skills: Strong problem-solving skills and attention to detail. Excellent communication and teamwork skills. Ability to work in an Agile/Scrum environment.
Languages: English: C1 Advanced
Posted 3 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Manager, Development

Job Description
Common accountabilities: Works with a high level of autonomy. Leads projects and contributes to broad cross-functional projects. Proficient in technical knowledge to ensure the team performs at a high level. Is recognized as a leader in own area and may formally train Specialists/Senior Specialists. Accountable for the budget, performance and results of a medium-sized team or multiple teams of employees. Influences resource, budget and policy planning and sets concrete development plans for the team members. Understands how main business drivers may impact own area. Can assess complex problems with broad impact on the activity, improve processes, recommend solutions and risk mitigation plans. Able to communicate complex information. Has a mid/long-term vision of the activity and the business and influences the department's strategy based on a broad understanding of the environment. Exposed to complex decision making.

Specific Accountabilities

Accountability / Business acumen
- Define and share the technical/functional team roadmap and vision according to the department/division/company objectives
- Report regularly on team activities
- Be accountable for the performance and results of a unit within own discipline or function
- Develop plans and priorities to address resource and operational challenges
- Suggest alternatives / improvements / new techniques in processes, flows, operational models and plans
- Moderate budgetary impact on the business

Technical Excellence
- Produce code of high quality with high efficiency
- Work in each step of the product development cycle, including creating technical requirements, leading complex feasibility studies, project planning, and identifying dependencies and improvements
- Investigate, analyze and give recommendations on the root causes of complex software and system defects
- Solve problems / troubleshoot in a timely manner and with a high level of engagement
- Foster applications that are easy to monitor and operate, to improve infrastructure availability
- Apply best practices on code quality and security-safe code through non-functional requirements mastery, code reviews, coding guidelines, unit testing and code refactoring

Technological Acumen - cross-disciplinary knowledge (e.g. UI, network)
- Deliver solutions by leveraging cloud platforms such as AWS, GCP, Azure, or PCF.
- Design and integrate messaging platforms, including RabbitMQ, Kafka, cloud messaging, and enterprise messaging.
- Apply distributed caching solutions like Redis, Memcache, etc., to optimize application performance.

Specific Skills
- Develop and maintain applications using advanced Java (Java 8 and above), including concurrency, multithreaded models, blocking/non-blocking IO, lambdas, streams, generics, and complex algorithms and data structures.
- Execute database operations (DDL, DML), model database structures, manage transactional scenarios, and implement isolation levels for both SQL and NoSQL databases.
- Utilize frameworks such as Spring Boot, Spring Cloud, or Quarkus to deliver scalable, complex solutions.
- Lead development of API-based digital journeys and implement alternatives such as DBT to achieve target outcomes.
- Utilize build and automation tools, code quality plugins, CI/CD pipelines, and containerization platforms like Docker, Podman, and Kubernetes.
- Employ logging and monitoring solutions like Splunk, the ELK Stack, Grafana, etc., to track technical KPIs and ensure system reliability.
- Apply application profiling tools such as JProfiler, YourKit, and VisualVM to optimize application performance.

Continuous Improvement
- Suggest evolution of the component and make recommendations on how to manage code debt / clean up the code
- Improve the technical aspects of project delivery and provide recommendations for engineering process improvement

Building cross-functional/technical teams & knowledge sharing
- Work easily with others / ensure smooth communication
- Consistently create successful engagement on projects and collaborate with cross-functional teams in driving operating and service excellence
- Attend PE community events
- Contribute to the R&D cultural transformation and talent development through technical coaching and mentoring of team members (junior members) (give an answer, provide feedback) - reactive coaching is mandatory
- Deliver trainings that have been defined - optional
- Be a Developer Advocate (depending on area of influence: going to hackathons, recruitment days, conferences, participating in open source)

Communication
- Ensure timely and appropriate communication to team members regarding company/organization information
- Collaborate with other software development, architecture, solutions, and QA teams to ensure that software systems are designed for testability, stability, scalability, and performance.

Diversity & Inclusion
Amadeus aspires to be a leader in Diversity, Equity and Inclusion in the tech industry, enabling every employee to reach their full potential by fostering a culture of belonging and fair treatment, attracting the best talent from all backgrounds, and acting as a role model for an inclusive employee experience.
Amadeus is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to gender, race, ethnicity, sexual orientation, age, beliefs, disability or any other characteristics protected by law.
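As a language-neutral sketch of the distributed-caching idea mentioned above (Redis-style SET-with-expiry and GET), here is an in-process Python analogue. It illustrates only the TTL contract, not a real networked cache; the injectable clock is a testing convenience, not part of any Redis API.

```python
import time

class TTLCache:
    """In-process sketch of Redis-style SET-with-TTL / GET semantics."""

    def __init__(self, clock=time.monotonic):
        self._store = {}
        self._clock = clock  # injectable for deterministic tests

    def set(self, key, value, ttl_seconds):
        """Store value, expiring ttl_seconds from now."""
        self._store[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        """Return the value, or None if absent or expired."""
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if self._clock() >= expires_at:
            del self._store[key]  # lazy eviction on read, as Redis may do
            return None
        return value
```

In a real deployment the same get/set-with-TTL pattern sits in front of the database so hot keys (sessions, reference data) avoid repeated queries.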
Posted 3 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance.

As a vital team member, you will work with cutting-edge technologies and play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other top technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us!

We'd love to explore more about you if you have:
- Qualification: B.E/B.Tech/M.E/M.Tech/PhD from a tier-1 engineering institute, relevant work experience with a top technology company in computer science or mathematics-related fields, and 3-5 years of experience in machine learning and NLP
- Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question answering, natural language generation, clustering, etc.
- 3+ years of experience working with LLMs in large-scale environments.
- Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs
- Knowledge of and hands-on experience with Transformer-based language models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc.
- Deep familiarity with the internals of at least a few machine learning algorithms and concepts
- Experience with deep learning frameworks like PyTorch and common machine learning libraries like scikit-learn, NumPy, pandas, NLTK, etc.
- Experience with ML model deployments using REST APIs, Docker, Kubernetes, etc.
- Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable
- Knowledge of basic data structures and algorithms
- Knowledge of real-time streaming tools/architectures like Kafka and Pub/Sub is a plus

Your role at Level AI includes but is not limited to:
- Big picture: Understand customers' needs, innovate, and use cutting-edge Deep Learning techniques to build data-driven solutions
- Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others
- Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company's products and services
- Optimize existing deep learning models for performance, scalability, and efficiency
- Build, deploy, and own scalable production NLP pipelines
- Build post-deployment monitoring and continual learning capabilities; propose suitable evaluation metrics and establish benchmarks
- Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues
- Desire to learn, implement, and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.

To learn more visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
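As a toy illustration of the text-classification work listed above, here is a bag-of-words nearest-example classifier in stdlib Python. The examples and labels are invented, and a production system at this level would use Transformer embeddings rather than raw token counts; only the overall shape (vectorize, compare, pick a label) carries over.

```python
from collections import Counter
import math

def tokenize(text):
    """Lowercase whitespace tokenization; real systems use proper tokenizers."""
    return text.lower().split()

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, labeled_examples):
    """Assign the label of the most similar labeled example (1-nearest-neighbour)."""
    query = Counter(tokenize(text))
    best_text, best_label = max(
        labeled_examples,
        key=lambda ex: cosine(query, Counter(tokenize(ex[0]))))
    return best_label
```

Swapping `Counter(tokenize(...))` for a sentence-embedding model turns this into the dense-retrieval classification pattern commonly used with LLM pipelines.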
Posted 3 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.

Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
The mission of Roku's Data Engineering team is to develop a world-class big data platform so that internal and external customers can leverage data to grow their businesses. Data Engineering works closely with business partners and Engineering teams to collect metrics on existing and new initiatives that are critical to business success. As a Senior Data Engineer working on device metrics, you will design data models and develop scalable data pipelines to capture different business metrics across different Roku devices.

About the role
Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetise large audiences, and provide advertisers with unique capabilities to engage consumers.
Roku streaming players and Roku TV™ models are available around the world through direct retail sales and licensing arrangements with TV brands and pay-TV operators. With tens of millions of players sold across many countries, thousands of streaming channels, and billions of hours watched over the platform, building a scalable, highly available, fault-tolerant big data platform is critical to our success. This role is based in Bangalore, India and requires hybrid working, with 3 days in the office.

What you'll be doing
- Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming) handling tens of terabytes of data ingested every day and a petabyte-sized data warehouse
- Build quality data solutions and refine existing diverse datasets into simplified data models encouraging self-service
- Build data pipelines that optimise for data quality and are resilient to poor-quality data sources
- Own the data mapping, business logic, transformations, and data quality
- Low-level systems debugging, performance measurement, and optimization on large production clusters
- Participate in architecture discussions, influence the product roadmap, and take ownership of and responsibility for new projects
- Maintain and support existing platforms and evolve to newer technology stacks and architectures

We're excited if you have
- Extensive SQL skills
- Proficiency in at least one scripting language; Python is required
- Experience in big data technologies like HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, etc.
- Proficiency in data modeling, including designing, implementing, and optimizing conceptual, logical, and physical data models to support scalable and efficient data architectures
- Experience with AWS, GCP, or Looker is a plus
- Ability to collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables
- 5+ years of professional experience as a data or software engineer
- BS in Computer Science; MS in Computer Science preferred

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a small number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer.
That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms. Show more Show less
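The posting above asks for pipelines that are "resilient to poor quality data sources." One common way to achieve that is a quality gate that quarantines malformed records instead of failing the whole batch. A minimal sketch, with illustrative field names and rules not taken from the posting:

```python
# Hypothetical quality-gating step: keep good records flowing, quarantine
# malformed ones with the reasons they failed, and never raise mid-batch.

def quality_gate(records, required_fields=("user_id", "event", "ts")):
    """Split a batch into (clean, quarantined) lists."""
    clean, quarantined = [], []
    for rec in records:
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            quarantined.append({"record": rec, "errors": missing})
        else:
            clean.append(rec)
    return clean, quarantined

batch = [
    {"user_id": 1, "event": "play", "ts": 1700000000},
    {"user_id": None, "event": "stop", "ts": 1700000001},  # bad: no user_id
]
clean, bad = quality_gate(batch)  # clean has 1 record, bad has 1
```

In a real pipeline the quarantined list would be written to a dead-letter table or topic for later inspection rather than returned in memory.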
Posted 3 days ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role overview As a Backend Software Engineer, you will be responsible for designing, developing, and maintaining server-side applications. You will collaborate with cross-functional teams to ensure seamless integration of various components and deliver high-performance, scalable solutions. Required Skills and Qualifications: -> Bachelor's degree in Computer Science, Engineering, or a related field. -> 5 to 8 years of experience in backend development. -> Strong experience in Java 8 or above. -> Strong experience in Kafka. -> Experience with database technologies like SQL, MySQL, PostgreSQL, or MongoDB. -> Strong understanding of RESTful APIs and microservices architecture. -> Familiarity with version control systems, preferably Git. -> Knowledge of cloud platforms (AWS, Azure, or Google Cloud) and CI/CD pipelines. -> Excellent problem-solving skills and attention to detail. -> Strong communication and teamwork skills. Good to have Skills: -> Experience with containerization technologies like Docker and orchestration tools like Kubernetes. -> Knowledge of serverless architecture. -> Familiarity with Agile methodologies and practices. What would you do here? Key Responsibilities: -> Design, develop, and maintain robust backend systems and APIs. -> Collaborate with front-end developers, product managers, and other stakeholders to understand requirements and deliver effective solutions. -> Write clean, maintainable, and efficient code following best practices and coding standards. -> Conduct thorough testing and debugging to ensure high-quality software. -> Participate in code reviews to uphold code quality and share knowledge. -> Stay current with emerging backend technologies and methodologies, incorporating them as appropriate. -> Troubleshoot and resolve backend-related issues. Show more Show less
Posted 3 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Profile Description We're seeking a Senior Java Developer with good experience in UI/UX technologies to join a technologically advanced team. You must have expertise in leading the design and development of multi-tiered Java EE-style applications. You should be fluent in Spring, databases, database interface layers, and associated Java development tools. WM_Technology Wealth Management Technology is responsible for the design, development, delivery, and support of the technical solutions behind the products and services used by the Morgan Stanley Wealth Management Business. Practice areas include: Analytics, Intelligence, & Data Technology (AIDT), Client Platforms, Core Technology Services (CTS), Financial Advisor Platforms, Global Banking Technology (GBT), Investment Solutions Technology (IST), Institutional Wealth and Corporate Solutions Technology (IWCST), Technology Delivery Management (TDM), User Experience (UX), and the CAO team. WM Product Technology Wealth Management Product Technology (WMPT) is a dynamic and fast-paced area within the Firm's WM Technology Division. We are responsible for creating innovative technology solutions for the Private Banking Group (PBG), one of the strategic growth areas of the Firm, providing cash management and lending products and services to our WM clients. This includes state-of-the-art technology for a nationwide network of Private Bankers and product specialists who work with Financial Advisors to provide access to products and services such as online banking, cards, deposit products, residential mortgages, securities-based loans, and tailored lending. If you are an exceptional individual who is interested in solving complex problems and building sophisticated solutions in a dynamic team environment, WMPT is the place for you. Software Engineering This is an Associate position that develops and maintains software solutions that support business needs.
Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. At Morgan Stanley India, we support the Firm's global businesses, with critical presence across Institutional Securities, Wealth Management, and Investment Management, as well as in the Firm's infrastructure functions of Technology, Operations, Finance, Risk Management, Legal and Corporate & Enterprise Services. Morgan Stanley has been rooted in India since 1993, with campuses in both Mumbai and Bengaluru. We empower our multi-faceted and talented teams to advance their careers and make a global impact on the business. For those who show passion and grit in their work, there's ample opportunity to move across the businesses. Interested in joining a team that's eager to create, innovate and make an impact on the world? Read on… What You'll Do In The Role Building enterprise server-side applications using Java EE technologies, with hands-on development using Java technology Hands-on development of Java applications as well as multi-tier Java EE-style applications in finance technology, supporting all document platform business lines. Collaborating with multiple upstream and downstream technology teams, including Document Platform co-partners and the firm's GL system, etc. Working in Agile development methodologies, collaborating with business and technology teams located globally. What You'll Bring To The Role At least 4 years of hands-on experience as a Java Developer building enterprise-level applications using Core Java 1.8 or higher. 4+ years of hands-on experience in Java EE design and programming, with a solid understanding of multi-tiered web-based applications. In-depth knowledge of JavaScript, Angular, jQuery, and CSS.
Practical experience with microservices frameworks like Spring Boot, event-driven services, and cloud-native application development. Practical experience with Web API, JUnit/TDD, Kafka, Git, and TeamCity. Strong knowledge of CI/CD pipelines and code quality analysis tools like SonarQube and FindBugs. Strong understanding of database analysis & design, including SQL, indexes, and query tuning. Ability to analyze business requirements and define the appropriate design with respect to data modeling, configuration, and customization of applications. Practical experience with data model design and modeling tools. Proven experience working in agile development methodologies. Excellent verbal and written communication skills. Skills Desired Working knowledge of building applications in the Cloud. Working knowledge of Unix/Linux and/or any scripting language. Exposure to JIRA or other ALM tools to create a productive, high-quality development environment. Working knowledge of financial markets, lending-based products, and Wealth Management What You Can Expect From Morgan Stanley We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 85 years. At our foundation are five core values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - that guide our more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you'll find trusted colleagues, committed mentors and a culture that values diverse perspectives, individual intellect and cross-collaboration. Our Firm is differentiated by the caliber of our diverse team, while our company culture and commitment to inclusion define our legacy and shape our future, helping to strengthen our business and bring value to clients around the world. Learn more about how we put this commitment to action: morganstanley.com/diversity.
We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. At Morgan Stanley, you'll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. There's also ample opportunity to move about the business for those who show passion and grit in their work. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents. Show more Show less
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings! One of our esteemed clients is a Japanese multinational information technology (IT) service and consulting company headquartered in Tokyo, Japan. The company acquired Italy-based Value Team S.p.A. and launched Global One Teams. Join this dynamic, high-impact firm where innovation meets opportunity, and take your career to new heights! We Are Hiring: Informatica Administrator (5-10 years) Note - We need a pure Informatica Admin; no Developer profiles required. Shift Timings: 9am to 6pm Relevant Experience: 5+ years Work Location and Address: Hi-tech City Layout, Madhapur, Hyderabad - 500 081 Interview process - 2 rounds (1 in-person round is a MUST) Mandatory skills - Informatica Administration in MDM-E360/PIM-P360 Oracle DB Unix Kafka configuration is an add-on. JD - To install, configure, manage, and support Informatica MDM and PIM platforms, ensuring high availability, performance, and data integrity for enterprise-level master and product data domains. Installation & Configuration Install and configure Informatica MDM (Hub, IDD, E360) and PIM (Informatica Product 360). Set up application tiers including database, application server (WebLogic/JBoss/Tomcat), and web server. Configure integration points with source/target systems. Experience in upgrading PC, IDQ, MDM/E360, and PIM/P360 to higher versions. Experience in migrating PC, IDQ, MDM/E360, and PIM/P360 objects, and skill at troubleshooting performance bottlenecks. Interested candidates, please share your updated resume along with the following details: Total Experience: Relevant Experience in Informatica Admin: Current Location: Current CTC: Expected CTC: Notice Period: We assure you that your profile will be handled with strict confidentiality. Apply now and be part of this incredible journey. Thanks, Syed Mohammad!! syed.m@anlage.co.in Show more Show less
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Senior .NET Engineer Experience: 5-12 Years Location: Hyderabad This is a WFO (Work from Office) role. Mandatory Skills: .NET Core, C#, Kafka, CI/CD pipelines, Observability tools, Orchestration tools, Cloud Microservices Interview Process First round - Online test Second round - Virtual technical discussion Manager/HR round - Virtual discussion Required Qualification Company Overview Our client is a globally recognized leader in the fintech industry, delivering cutting-edge trading solutions for professional traders worldwide. With over 15 years of excellence, a robust international presence, and a team of over 300 skilled professionals, we continually push the boundaries of technology to remain at the forefront of financial innovation. Committed to fostering a collaborative and dynamic environment, our team prioritizes technical excellence, innovation, and continuous growth. Join our agile-based team to contribute to the development of advanced trading platforms in a rapidly evolving industry. Position Overview We are seeking a highly skilled Senior .NET Engineer to play a pivotal role in the design, development, and optimization of highly scalable and performant domain-driven microservices for our real-time trading applications. This role demands advanced expertise in multi-threaded environments, asynchronous programming, and modern software design patterns such as Clean Architecture and Vertical Slice Architecture. As part of an Agile squad, you will collaborate with cross-functional teams to deliver robust, secure, and efficient systems, adhering to the highest standards of quality, performance, and reliability. This position is ideal for engineers who excel in building low-latency, high-concurrency systems and have a passion for advancing fintech solutions. Key Responsibilities System Design and Development Architect and develop real-time, domain-driven microservices using .NET Core to ensure scalability, modularity, and performance.
Leverage multi-threaded programming techniques and asynchronous programming paradigms to build systems optimized for high-concurrency workloads. Implement event-driven architectures to enable seamless communication between distributed services, leveraging tools such as Kafka or AWS SQS. System Performance and Optimization Optimize applications for low latency and high throughput in trading environments, addressing challenges related to thread safety, resource contention, and parallelism. Design fault-tolerant systems capable of handling large-scale data streams and real-time events. Proactively monitor and resolve performance bottlenecks using advanced observability tools and techniques. Architectural Contributions Contribute to the design and implementation of scalable, maintainable architectures, including Clean Architecture, Vertical Slice Architecture, and CQRS. Collaborate with architects and stakeholders to align technical solutions with business requirements, particularly for trading and financial systems. Employ advanced design patterns to ensure robustness, fault isolation, and adaptability. Agile Collaboration Participate actively in Agile practices, including Scrum ceremonies such as sprint planning, daily stand-ups, and retrospectives. Collaborate with Product Owners and Scrum Masters to refine technical requirements and deliver high-quality, production-ready software. Code Quality and Testing Write maintainable, testable, and efficient code adhering to test-driven development (TDD) methodologies. Conduct detailed code reviews, ensuring adherence to best practices in software engineering, coding standards, and system architecture. Develop and maintain robust unit, integration, and performance tests to uphold system reliability and resilience. Monitoring and Observability Integrate OpenTelemetry to enhance system observability, enabling distributed tracing, metrics collection, and log aggregation.
Collaborate with DevOps teams to implement real-time monitoring dashboards using tools such as Prometheus, Grafana, and Elastic (Kibana). Ensure systems are fully observable, with actionable insights into performance and reliability metrics. Required Expertise - Technical Expertise and Skills: 5+ years of experience in software development, with a strong focus on .NET Core and C#. Deep expertise in multi-threaded programming, asynchronous programming, and handling concurrency in distributed systems. Extensive experience in designing and implementing domain-driven microservices with advanced architectural patterns like Clean Architecture or Vertical Slice Architecture. Strong understanding of event-driven systems, with knowledge of messaging frameworks such as Kafka, AWS SQS, or RabbitMQ. Proficiency in observability tools, including OpenTelemetry, Prometheus, Grafana, and Elastic (Kibana). Hands-on experience with CI/CD pipelines, containerization using Docker, and orchestration tools like Kubernetes. Expertise in Agile methodologies under Scrum practices. Solid knowledge of Git and version control best practices. Beneficial Skills Familiarity with Saga patterns for managing distributed transactions. Experience in trading or financial systems, particularly with low-latency, high-concurrency environments. Advanced database optimization skills for relational databases such as SQL Server. Certifications And Education Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. Relevant certifications in software development, system architecture, or AWS technologies are advantageous. Why Join? Exceptional team building and corporate celebrations Be part of a high-growth, fast-paced fintech environment. Flexible working arrangements and supportive culture. Opportunities to lead innovation in the online trading space.
Skills: ci/cd pipelines, prometheus, orchestration tools, dot net core, grafana, git, cloud microservices, aws sqs, agile methodologies, .net core, vertical slice architecture, kafka, clean architecture, test-driven development (tdd), observability tools, .net, asynchronous programming, c#, elastic (kibana), event-driven architectures, multi-threaded programming Show more Show less
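The fault-tolerance responsibilities above (retrying transient failures in event handlers) follow a standard retry-with-exponential-backoff pattern. A minimal sketch, written in Python for brevity even though this role's stack is .NET; the delays are shortened so it runs instantly, and all names are illustrative:

```python
import time

def with_retries(handler, attempts=3, base_delay=0.001):
    """Wrap an event handler: retry on failure with exponential backoff."""
    def wrapped(event):
        for attempt in range(attempts):
            try:
                return handler(event)
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of retries: let the caller (or a DLQ) handle it
                time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...
    return wrapped

calls = []
def flaky(event):
    """Simulated handler that fails twice, then succeeds."""
    calls.append(event)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "processed"

result = with_retries(flaky)("order-42")  # succeeds on the third attempt
```

In a production system the backoff would be longer, jittered, and limited to retriable exception types.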
Posted 3 days ago
8.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
About Demandbase: Demandbase is the Smarter GTM™ company for B2B brands. We help marketing and sales teams overcome the disruptive data and technology fragmentation that inhibits insight and forces them to spam their prospects. We do this by injecting Account Intelligence into every step of the buyer journey, wherever our clients interact with customers, and by helping them orchestrate every action across systems and channels - through advertising, account-based experience, and sales motions. The result? You spot opportunities earlier, engage with them more intelligently, and close deals faster. As a company, we're as committed to growing careers as we are to building world-class technology. We invest heavily in people, our culture, and the community around us. We have offices in the San Francisco Bay Area, New York, Seattle, and teams in the UK and India, and allow employees to work remotely. We have also been continuously recognized as one of the best places to work in the San Francisco Bay Area. We're committed to attracting, developing, retaining, and promoting a diverse workforce. By ensuring that every Demandbase employee is able to bring a diversity of talents to work, we're increasingly capable of living out our mission to transform how B2B goes to market. We encourage people from historically underrepresented backgrounds and all walks of life to apply. Come grow with us at Demandbase! What you'll be doing: This job is for a responsible individual contributor with a primary duty of leading the development effort and building scalable distributed systems. Design & develop scalable data processing platforms. Work on developing scalable data architecture systems. It provides the opportunity and flexibility to own a problem space and drive its product road map. With ample opportunities to learn and explore, a highly motivated and committed engineer can push the limits of technologies in the NLP area as well.
Follow engineering best practices to solve data matching and data search related problems Work closely with cross-functional teams in an agile environment. What we're looking for: You have strong analytical and problem-solving skills. You are a self-motivated learner, eager to learn new technologies. You are receptive to constructive feedback. You are confident and articulate, with excellent written and verbal communication skills. You are open to working in a small development environment. Skills Required: Bachelor's degree in computer science or an equivalent discipline from a top engineering institution Adept in computer science fundamentals and passionate about algorithms, programming and problem solving. 8-11 years of Software Engineering experience in product companies is a plus. Should have experience in writing production-level code in Java or Scala. Good to have experience in writing production-level code in Python. Should have experience in multithreading, distributed systems, and performance optimization. Good knowledge of database concepts & proficient in SQL. Experience in a big data tech stack like Spark, Kafka & Airflow is a plus. Should have knowledge/experience of one of the clouds: AWS/Azure/GCP. Experience in writing unit tests & integration tests is a must. Show more Show less
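The "data matching" problems mentioned above are often approached with token-set similarity measures before heavier machinery is brought in. A hedged sketch of one such heuristic, Jaccard similarity over name tokens; the threshold and tokenization are illustrative, not Demandbase's actual method:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two strings, treated as sets of lowercase tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def is_match(a: str, b: str, threshold: float = 0.5) -> bool:
    """Crude matching decision: similar enough token sets count as a match."""
    return jaccard(a, b) >= threshold

score = jaccard("Acme Corp Inc", "ACME Corp")  # 2 shared tokens of 3 total
match = is_match("Acme Corp Inc", "ACME Corp")
```

Production entity matching layers normalization, blocking, and learned scoring on top of simple measures like this.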
Posted 3 days ago
0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Company Description Logikview Technologies Pvt. Ltd. is a forward-thinking data analytics services firm. As a strategic partner, we provide a comprehensive range of analytics services to our clients' business units or analytics teams. From setting up big data and analytics infrastructure to performing data transformations and building advanced predictive analytical engines, Logikview supports clients throughout their analytics journey. We offer ready-to-deploy productized analytics solutions in domains such as retail, telecom, education, and healthcare. Role Description We are seeking a full-time Technical Lead for an on-site role in Indore. As a Technical Lead, you will oversee a team of engineers, manage project timelines, and ensure the successful delivery of analytics solutions. Day-to-day tasks include designing and implementing data models, developing and optimizing data pipelines, and collaborating with cross-functional teams to address technical challenges. You will also be responsible for code reviews, mentoring team members, and staying updated with the latest technological advancements. Qualifications Proficiency in data modeling, data warehousing, and ETL processes Experience with big data technologies such as Hadoop, Spark, and Kafka Knowledge of programming languages like Python, Java, and SQL Strong understanding of machine learning algorithms and predictive analytics Excellent problem-solving skills and the ability to troubleshoot technical issues Proven experience in team leadership and project management Bachelor's or Master's degree in Computer Science, Information Technology, or a related field Relevant certifications in data analytics or big data technologies are a plus Show more Show less
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Intellismith Intellismith, founded in 2019, is a dynamic HR service and technology startup. Our mission is to tackle India's employability challenges head-on. We specialize in scaling talent acquisition and technology resource outsourcing. Also, as an IBM and Microsoft Business Partner, we leverage industry-leading solutions to enhance and diversify our offerings. As we chart our growth trajectory, we're transitioning from a service-centric model to a product-focused company. Our journey involves building a cutting-edge skilling platform to empower Indian youth with domain-specific training, making them job-ready for the competitive market. Why Join Intellismith? Impactful Mission: Be part of a forward-thinking organisation committed to solving employability challenges. Your work directly contributes to bridging the skills gap and transforming lives. Innovation and Growth: Contribute to our exciting transition from services to products. Shape the future of our skilling platform and impact Indian youth positively. Collaborative Environment: Work alongside talented professionals across multiple locations. Our diverse teams foster creativity and learning. Entrepreneurial Spirit: Intellismith encourages fresh ideas and entrepreneurial thinking. Your voice matters here. As a leading outsourcing partner, we are hiring a Java Backend Developer to work on a project for our client, which is the largest provider of telecoms and mobile money services in 14 countries spanning Sub-Saharan, Central, and Western Africa. Job Details: Experience: 3-5 years of experience in Java backend development. CTC Bracket: Competitive and commensurate with experience Qualification: BE / B Tech / MCA / BCA / MTech.
Location: Gurugram (WFO - 5 days) Notice Period: Immediate to 15 days (Candidates with a notice period of less than 30 days are preferred) Mandatory Skills: Core Java Spring Boot Microservices Kafka Data Structures & Algorithms Required Skills: Proficiency in Computer Science fundamentals - object-oriented design, data structures, algorithm design, and complexity analysis Experience in developing RESTful APIs Strong understanding of both Monolithic and Microservices architectures Hands-on experience in developing and deploying Microservices components, preferably using Spring Boot Strong understanding of Docker and Kubernetes Experience with version control systems such as Git & Bitbucket Experience with SQL/NoSQL databases Nice to have: Jenkins and CI/CD knowledge Nice to have: Python understanding Nice to have: understanding of Kafka Knowledge of build tools like Maven & Gradle #Java #Javadeveloper #Springboot #Microservices #Kafka #Coding #Immediatejoiner #career #ITJobs Show more Show less
Posted 3 days ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
As a DevOps Engineer at Freight Tiger, you will play a crucial role in managing and enhancing our cloud infrastructure. You will work closely with cross-functional teams to automate operations, ensure seamless deployment processes, monitor systems, and enhance security and developer productivity. Key skills & responsibilities: Cloud Management: Design, configure, manage and secure cloud services, primarily the AWS ecosystem including VPC, S3, API Gateway, EC2, IAM, Load Balancers, Kafka clusters & Elasticsearch clusters, and develop scripts to automate operations and routine tasks. Linux & Database Administration: Have a strong foundation in Linux and hands-on experience in system management and troubleshooting. Hands-on exposure to PostgreSQL, MySQL & MongoDB preferred. Docker & Kubernetes Expertise: Deploy, manage, and troubleshoot applications on Docker and Kubernetes. CI/CD Pipeline Development & IaC: Create and maintain CI/CD pipelines using GitLab and Argo CD, and utilize Terraform to automate cloud resource provisioning and management. Observability and Monitoring: Implement monitoring solutions with Prometheus and Grafana, manage logging pipelines, and configure alerting systems. Use APM tools like New Relic to monitor and optimize application performance. Preferred qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 3-6 years of experience in a DevOps role or related position. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Show more Show less
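The "configure alerting systems" duty above usually comes down to threshold-with-duration rules: fire only when a metric stays above a limit for a sustained window, not on a single spike (this is what a Prometheus `for:` clause expresses). A minimal sketch of that evaluation logic, with made-up sample values:

```python
def should_alert(samples, threshold, for_samples):
    """Return True if `samples` exceeds `threshold` for
    `for_samples` consecutive readings (duration-gated alert)."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= for_samples:
            return True
    return False

cpu = [40, 95, 96, 97, 50, 92]  # percent, one sample per scrape interval
alert = should_alert(cpu, threshold=90, for_samples=3)  # three sustained breaches
```

Duration gating is what keeps on-call pages from firing on transient blips; the same idea applies whether the rule engine is Prometheus, Grafana alerting, or New Relic.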
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Mytholog Innovations At Mytholog Innovations, we turn visionary ideas into robust digital realities. Leveraging deep expertise in backend development, microservices, cloud-native architectures, and DevOps, we partner with clients to design scalable systems, enhance existing infrastructures, and deliver impactful engineering solutions. Our rigorous talent screening and commitment to excellence empower companies to build high-performing tech teams that drive sustained innovation and business growth. Job Description We're seeking a Senior Java Developer with hands-on experience processing large volumes of Kafka/RabbitMQ events, integrating complex third-party systems, and building rule-based engines using Drools. The ideal candidate is a backend powerhouse with a deep understanding of distributed systems, security-first architectures, and scalable integration patterns. If you're passionate about high-performance systems, clean architecture, and secure, maintainable code, this opportunity will challenge and reward you. Location: Remote Employment Type: Full-Time (Contract) Experience: Minimum 5 years Probation: 15 days Note: We are only looking for individual contractors. No agencies please. This is a full-time contractor role. It does not include traditional employee benefits (insurance, PF, etc.). Standard TDS will be deducted from payments, and tax filing is the contractor's responsibility. Key Responsibilities Develop and maintain high-throughput event-driven systems using RabbitMQ/Kafka. Design and implement Drools-based business rule engines for dynamic decision-making. Build secure, performant Java Spring Boot microservices with clear boundaries and responsibilities. Develop the integration of external systems/APIs with attention to reliability, fault tolerance, and retries. Implement and enforce strong security practices (authentication, authorization, encryption).
Own and optimize event consumption patterns, consumer group management, dead-letter handling, and backpressure control. Requirements 5+ years of hands-on backend development experience in Java (Spring Boot). Proven ability to process high volumes of Kafka/RabbitMQ messages at scale (multi-million/day range). Deep knowledge of event-driven architecture, distributed systems, and asynchronous processing. Proficiency with Drools or similar rule engines for dynamic business logic. Strong background in secure API development, OAuth2, JWT, and data encryption techniques. Hands-on experience integrating third-party systems and APIs with resilience patterns. Familiarity with cloud-native deployment practices, Docker, and CI/CD workflows. Strong debugging, profiling, and performance-tuning capabilities. Excellent communication skills to interface with both technical and non-technical stakeholders. Flexibility to work aligned with client time zones. Bonus: Exposure to Resilience4j, WebFlux, or reactive programming. Performance Evaluation Plan Days 1-15: Probation & Onboarding Deliver a sample Kafka/RabbitMQ consumer with metrics, retries, and logging. Submit a technical assessment of the current integration or rule setup. Demonstrate ownership and proactive communication with the team. Days 16-30: Production Integration & Rule Logic Deliver a real-world event processor integrated with at least one external system. Implement business logic using Drools with full test coverage and documentation. Conduct a peer review or propose optimization to an existing event flow. Days 31-45: Scale, Secure, and Own Release a critical, production-grade event consumer or integration module. Patch a key security vulnerability or performance bottleneck. Establish yourself as a reliable backend expert across ongoing initiatives.
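The consumer behaviors this role owns (per-message retries, then dead-letter routing after a maximum number of attempts) can be simulated without a broker. A broker-free sketch in Python; in production this logic sits behind a real Kafka/RabbitMQ client, and all names here are illustrative:

```python
from collections import defaultdict

MAX_ATTEMPTS = 3

def consume(messages, handler):
    """Process messages; requeue failures, dead-letter after MAX_ATTEMPTS."""
    attempts = defaultdict(int)
    dlq, processed = [], []
    queue = list(messages)
    while queue:
        msg = queue.pop(0)
        attempts[msg["id"]] += 1
        try:
            handler(msg)
            processed.append(msg["id"])
        except Exception:
            if attempts[msg["id"]] >= MAX_ATTEMPTS:
                dlq.append(msg["id"])   # give up: route to the dead-letter queue
            else:
                queue.append(msg)       # requeue for another attempt

    return processed, dlq

def handler(msg):
    """Simulated handler: one poison message always fails."""
    if msg["payload"] == "poison":
        raise ValueError("unprocessable")

processed, dlq = consume(
    [{"id": "m1", "payload": "ok"}, {"id": "m2", "payload": "poison"}],
    handler,
)  # m1 succeeds; m2 fails three times and is dead-lettered
```

Real consumers add offset management, backoff between attempts, and metrics around the DLQ path, but the control flow is the same.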
Benefits Fully Remote - Work from your preferred location Global Exposure - Collaborate with fast-moving startups worldwide Supportive Culture - Transparent, collaborative, and growth-oriented team Certification Support - Timely reimbursement programs to boost your credentials Performance-Focused Growth - Advancement based on impact, not tenure Show more Show less
Posted 3 days ago
7.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Mytholog Innovations

At Mytholog Innovations, we turn visionary ideas into robust digital realities. Leveraging deep expertise in backend development, microservices, cloud-native architectures, and DevOps, we partner with clients to design scalable systems, enhance existing infrastructures, and deliver impactful engineering solutions. Our rigorous talent screening and commitment to excellence empower companies to build high-performing tech teams that drive sustained innovation and business growth.

Job Description

We're looking for a Product Manager (Delivery-Focused) to lead and orchestrate multiple client engagements involving Java backend, full stack, and React-based engineering teams. This role demands a hands-on leader who thrives in a services environment, can interface directly with clients, and can drive delivery excellence across distributed teams. You'll manage software delivery across fast-moving external projects for global clients.

Location: Remote
Employment Type: Full-Time (Contract)
Experience: 7-9 years
Probation: 15 days

Note: We are only looking for individual contractors. No agencies, please. This is a full-time contractor role. It does not include traditional employee benefits (insurance, PF, etc.). Standard TDS will be deducted from payments, and tax filing is the contractor's responsibility.

Key Responsibilities
Own end-to-end project delivery for client-facing software services.
Manage multiple cross-functional engineering teams across Java, Spring Boot, React, and cloud-native technologies.
Serve as the primary point of contact for client stakeholders, managing expectations, timelines, and deliverables.
Translate client goals into sprint plans, task breakdowns, and clear delivery objectives for engineers.
Identify blockers and proactively resolve delivery risks across tech and communication layers.
Maintain structured progress tracking, reporting, and sprint cadences (Jira, Notion, etc.).
Ensure code quality, performance benchmarks, security, and uptime are maintained as per client SLAs.
Coordinate between design, frontend, backend, and QA specialists to ensure cohesive, on-time delivery.
Ensure teams follow best practices in agile, Git workflows, CI/CD, and secure development.

Requirements
7-9 years of experience in product or delivery management within a services or consulting environment.
Strong understanding of Java-based backend systems and React full-stack architectures.
Experience managing distributed teams working on external client projects.
Technical familiarity with APIs, microservices, event-driven systems, and cloud-native solutions.
Excellent client communication and stakeholder management skills – written and verbal.
Strong grasp of Agile/Scrum, sprint planning, grooming, and execution across multiple workstreams.
Hands-on experience with project management and documentation tools (Jira, Notion, Slack, Postman, GitHub, Figma).
Flexibility to work aligned with client time zones.
Bonus: Experience coordinating projects involving Kafka/RabbitMQ, cloud platforms, or Drools rule engines.

Performance Evaluation Plan

Days 1–15: Alignment & Setup
Review and assess current active client projects.
Lead a sprint planning session and publish sprint goals for at least one engagement.
Build trust with internal engineering teams and demonstrate strong communication flow.

Days 16–30: Active Project Execution
Deliver a full sprint cycle with internal teams and maintain velocity across Java or React streams.
Provide clients with timely status updates, blockers, and mitigation plans.
Identify 1–2 process inefficiencies and propose improvements.

Days 31–45: Multi-Stream Coordination
Own delivery across multiple client projects.
Establish a consistent cadence for stakeholder reporting and retrospectives.
Serve as a reliable bridge between client priorities and engineering execution.
Benefits
Fully Remote – Work from your preferred location
Global Exposure – Collaborate with startups and enterprises worldwide
Supportive Culture – Transparent, collaborative, and growth-oriented team
Certification Support – Timely reimbursement programs to boost your credentials
Performance-Focused Growth – Advancement based on impact, not tenure
Posted 3 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance.

As a vital team member, you will work with cutting-edge technologies and play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other leading technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us!

We'd love to explore more about you if you have:

Qualification: B.E/B.Tech/M.E/M.Tech/PhD from a tier-1 engineering institute, with relevant work experience at a top technology company in computer science or mathematics-related fields and 3-5 years of experience in machine learning and NLP
Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question answering, natural language generation, clustering, etc.
3+ years of experience working with LLMs in large-scale environments.
Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs
Knowledge and hands-on experience with Transformer-based language models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc.
Deep familiarity with the internals of at least a few machine learning algorithms and concepts
Experience with deep learning frameworks like PyTorch and common machine learning libraries like scikit-learn, numpy, pandas, NLTK, etc.
Experience with ML model deployments using REST APIs, Docker, Kubernetes, etc.
Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable
Knowledge of basic data structures and algorithms
Knowledge of real-time streaming tools/architectures like Kafka and Pub/Sub is a plus

Your role at Level AI includes but is not limited to:

Big picture: understand customers' needs, innovate, and use cutting-edge deep learning techniques to build data-driven solutions
Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others
Collaborate with cross-functional teams to integrate and upgrade AI solutions into the company's products and services
Optimize existing deep learning models for performance, scalability, and efficiency
Build, deploy, and own scalable production NLP pipelines
Build post-deployment monitoring and continual learning capabilities; propose suitable evaluation metrics and establish benchmarks
Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues
Desire to learn, implement, and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.

To learn more, visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
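Text classification, mentioned repeatedly above, can be illustrated without any of the Transformer models the listing names. Below is a deliberately tiny, stdlib-only sketch (the labels and training strings are made up for illustration): a nearest-centroid bag-of-words classifier that picks the label whose training vocabulary overlaps the input most.

```python
from collections import Counter

def tokenize(text):
    """Crude whitespace tokenizer; real pipelines use subword tokenizers."""
    return text.lower().split()

def train_centroids(labeled_docs):
    """Accumulate one bag-of-words Counter per label."""
    centroids = {}
    for label, doc in labeled_docs:
        centroids.setdefault(label, Counter()).update(tokenize(doc))
    return centroids

def classify(text, centroids):
    """Score each label by how often the input's words appear in its centroid."""
    words = set(tokenize(text))
    return max(centroids, key=lambda lbl: sum(centroids[lbl][w] for w in words))
```

A production system would swap the Counter overlap for embeddings from a model like BERT, but the train/score/argmax shape is the same.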
Posted 3 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Requirements
4+ years of experience as a Data Engineer.
Strong proficiency in SQL.
Hands-on experience with modern cloud data warehousing solutions (Snowflake, BigQuery, Redshift).
Expertise in ETL/ELT processes and batch and streaming data processing.
Proven ability to troubleshoot data issues and propose effective solutions.
Knowledge of AWS services (S3, DMS, Glue, Athena).
Familiarity with dbt for data transformation and modeling.
Must be fluent in English communication.

Desired Experience
Experience with additional AWS services (EC2, ECS, EKS, VPC, IAM).
Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt.
Proficiency in Python for data engineering tasks.
Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions.
Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS).
Experience with CI/CD pipelines and automation for data processes.

Skills: SNS, data, Snowflake, Terraform, data engineer, BigQuery, Redshift, SQS, Dagster, ETL, AWS Step Functions, ELT, Python, AWS Kinesis, DMS, S3, cloud, Airflow, CI/CD, dbt, Glue, Terragrunt, Kafka, SQL, Athena, AWS
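The ELT pattern this listing centers on (load raw data, then transform with SQL, as dbt does against a warehouse) can be sketched with Python's built-in sqlite3 as a stand-in for Snowflake/Redshift — the table and column names here are illustrative only:

```python
import sqlite3

def run_elt(rows):
    """Load raw rows untransformed, then aggregate with SQL — the same
    load-then-model shape a dbt project applies in a cloud warehouse."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?)", rows)
    # "transform" step: a derived model computed inside the database
    cur = conn.execute(
        "SELECT customer, SUM(amount) FROM raw_orders "
        "GROUP BY customer ORDER BY customer"
    )
    return cur.fetchall()
```

The design point is that transformation logic lives in SQL, versioned and testable, rather than in bespoke extraction code.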
Posted 3 days ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Remote
Hiring for US-based Multinational Company (MNC)

We are seeking a skilled and detail-oriented Data Engineer to join our team. In this role, you will design, build, and maintain scalable data pipelines and infrastructure to support business intelligence, analytics, and machine learning initiatives. You will work closely with data scientists, analysts, and software engineers to ensure that high-quality data is readily available and usable.

Responsibilities
Design and implement scalable, reliable, and efficient data pipelines for processing and transforming large volumes of structured and unstructured data.
Build and maintain data architectures including databases, data warehouses, and data lakes.
Collaborate with data analysts and scientists to support their data needs and ensure data integrity and consistency.
Optimize data systems for performance, cost, and scalability.
Implement data quality checks, validation, and monitoring processes.
Develop ETL/ELT workflows using modern tools and platforms.
Ensure data security and compliance with relevant data protection regulations.
Monitor and troubleshoot production data systems and pipelines.

Requirements
Proven experience as a Data Engineer or in a similar role
Strong proficiency in SQL and at least one programming language such as Python, Scala, or Java
Experience with data pipeline tools such as Apache Airflow, Luigi, or similar
Familiarity with modern data platforms and tools:
Big Data: Hadoop, Spark
Data Warehousing: Snowflake, Redshift, BigQuery, Azure Synapse
Databases: PostgreSQL, MySQL, MongoDB
Experience with cloud platforms (AWS, Azure, or GCP)
Knowledge of data modeling, schema design, and ETL best practices
Strong analytical and problem-solving skills
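The "data quality checks, validation, and monitoring" responsibility above usually starts as simple rule-based assertions run inside the pipeline. A minimal sketch (field names and rules are invented for illustration; frameworks like Great Expectations generalize this idea):

```python
def check_quality(records, required, non_negative=()):
    """Return a list of human-readable issues: missing required fields
    and negative values in fields that should be non-negative."""
    issues = []
    for i, rec in enumerate(records):
        for field in required:
            if rec.get(field) in (None, ""):
                issues.append(f"row {i}: missing {field}")
        for field in non_negative:
            value = rec.get(field)
            if isinstance(value, (int, float)) and value < 0:
                issues.append(f"row {i}: negative {field}")
    return issues
```

In a real pipeline these issue lists feed monitoring dashboards or fail the run before bad data reaches the warehouse.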
Posted 3 days ago
4.0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
Requirements
4+ years of experience as a Data Engineer.
Strong proficiency in SQL.
Hands-on experience with modern cloud data warehousing solutions (Snowflake, BigQuery, Redshift).
Expertise in ETL/ELT processes and batch and streaming data processing.
Proven ability to troubleshoot data issues and propose effective solutions.
Knowledge of AWS services (S3, DMS, Glue, Athena).
Familiarity with dbt for data transformation and modeling.
Must be fluent in English communication.

Desired Experience
Experience with additional AWS services (EC2, ECS, EKS, VPC, IAM).
Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt.
Proficiency in Python for data engineering tasks.
Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions.
Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS).
Experience with CI/CD pipelines and automation for data processes.

Skills: SNS, data, Snowflake, Terraform, data engineer, BigQuery, Redshift, SQS, Dagster, ETL, AWS Step Functions, ELT, Python, AWS Kinesis, DMS, S3, cloud, Airflow, CI/CD, dbt, Glue, Terragrunt, Kafka, SQL, Athena, AWS
Posted 3 days ago
0.0 - 1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
We're Hiring: GCP DevOps Engineer (with Node.js Skills)

Locations: Bengaluru / Chennai / Pune / Hyderabad / Vadodara (On-site/Hybrid as per role)
Positions Available: 3
Employment Type: Full-time
Salary: ₹10–14 LPA (based on experience and interview performance)

About the Role:
We are looking for passionate and curious GCP DevOps Engineers who are comfortable working in dynamic environments and love combining DevOps best practices with backend development. If you have 1–3 years of hands-on experience, basic knowledge of Node.js, and a solid grip on GCP, Kubernetes, and Git, this could be the perfect role to elevate your career.

What You'll Be Doing:
Deploy, manage, and monitor cloud infrastructure on Google Cloud Platform (GCP)
Work with Kubernetes to orchestrate containerized applications
Collaborate with developers to integrate Node.js-based services and APIs
Handle Kafka messaging pipelines (consumers & producers)
Manage PostgreSQL databases (schema design, queries, performance tuning)
Utilize Git and GitHub for version control, code reviews, and CI workflows
Use VS Code or similar IDEs for development and troubleshooting
Troubleshoot issues independently and ensure smooth deployment cycles
Collaborate effectively in distributed teams and maintain clear documentation

Minimum Qualifications:
Bachelor's degree in Computer Science, Engineering, or equivalent practical experience
1–3 years of hands-on experience in software development or DevOps engineering

Key Skills We're Looking For:
Google Cloud Platform (GCP) services
Kubernetes and containerization tools
Basic to intermediate Node.js development (especially REST APIs/backend services)
Apache Kafka (publishing/consuming messages)
PostgreSQL or similar RDBMS
Git, GitHub, and collaborative workflows
Excellent troubleshooting, problem-solving, and team collaboration skills

Good to Have:
Experience with CI/CD pipelines (e.g., Jenkins, GitHub Actions)
Familiarity with Agile/Scrum methodologies
Exposure to observability tools (Prometheus, Grafana, ELK, etc.)

Why Join Us?
Work on impactful, production-grade cloud solutions
Collaborate with highly skilled teams across geographies
Gain experience across cutting-edge DevOps stacks
Fast-paced, learning-rich environment with room to grow

Job Types: Full-time, Permanent
Pay: ₹1,000,000.00 - ₹1,400,000.00 per year
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience:
Google Cloud Platform: 2 years (Required)
Kubernetes: 1 year (Required)
Node.js: 1 year (Preferred)
Work Location: In person
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad
Budget: 3.5x
Notice: Immediate joiners

Requirements:
• BS degree in computer science, computer engineering, or equivalent
• 5-9 years of experience delivering enterprise software solutions
• Familiarity with Spark, Scala, Python, and AWS Cloud technologies
• 2+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, and Scala
• Flair for data, schemas, and data models, and for bringing efficiency to the big-data life cycle
• Experience with Agile development methodologies
• Experience with data ingestion and transformation
• Understanding of secure application development methodologies
• Experience with Airflow and Python is preferred
• Understanding of automated QA needs related to Big Data technology
• Strong object-oriented design and analysis skills
• Excellent written and verbal communication skills

Responsibilities:
• Utilize your software engineering skills, including Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
• Integrate new data sources and tools
• Implement scalable and reliable distributed data replication strategies
• Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
• Perform analysis of large data sets using components from the Hadoop ecosystem
• Own product features from development and testing through to production deployment
• Evaluate big data technologies and prototype solutions to improve our data processing architecture
• Automate different pipelines
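The MapReduce model named in the requirements above can be shown in a few lines of plain Python (no Hadoop cluster involved; this is the conceptual shape only): map each document to (key, value) pairs, then reduce by key. The same shape appears in Spark as `flatMap` followed by `reduceByKey`.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map step: emit a (word, 1) pair for every token in the document."""
    return [(word, 1) for word in doc.lower().split()]

def reduce_phase(pairs):
    """Reduce step: sum the values for each key."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

def word_count(docs):
    """Classic MapReduce word count over a collection of documents."""
    return reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
```

On a real cluster the framework shuffles the mapped pairs so that all values for one key land on one reducer; here the in-memory dict plays that role.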
Posted 3 days ago
Kafka, a popular distributed streaming platform, has gained significant traction in the tech industry in recent years. Job opportunities for Kafka professionals in India have been on the rise, with many companies looking to leverage Kafka for real-time data processing and analytics. If you are a job seeker interested in Kafka roles, here is a comprehensive guide to help you navigate the job market in India.
India's major tech hubs are known for their thriving technology industries and have a high demand for Kafka professionals.
The average salary range for Kafka professionals in India varies based on experience levels. Entry-level positions may start at around INR 6-8 lakhs per annum, while experienced professionals can earn between INR 12-20 lakhs per annum.
Career progression in Kafka typically follows a path from Junior Developer to Senior Developer, and then to a Tech Lead role. As you gain more experience and expertise in Kafka, you may also explore roles such as Kafka Architect or Kafka Consultant.
In addition to Kafka expertise, employers often look for professionals with skills in:
- Apache Spark
- Apache Flink
- Hadoop
- Java/Scala programming
- Data engineering and data architecture
As you explore Kafka job opportunities in India, remember to showcase your expertise in Kafka and related skills during interviews. Prepare thoroughly, demonstrate your knowledge confidently, and stay updated with the latest trends in Kafka to excel in your career as a Kafka professional. Good luck with your job search!