5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview
Come join Intuit as a Software Engineer 2 on the QuickBooks Online Payroll team. You will join an innovative and passionate team of engineers using cutting-edge technologies like React, Spring Boot, AI, Kubernetes, AWS, Elasticsearch, Kafka, and globally distributed services. We are looking for an engineer with a strong background in back-end web technologies (Java, Spring, REST services, etc.). You will work on features and services that enhance the product set and delight our Small and Mid-Market Business customers.

What you'll bring
- At least 3-5 years' experience developing web, software, or mobile applications
- BS/MS in Computer Science or equivalent work experience
- Strong object-oriented programming concepts
- Strong Java and Java EE skills and Spring framework experience
- Strong experience in one of the leading JavaScript frameworks
- Strong experience in back-end programming in Java / Java EE and Spring Boot
- At least 3 years of experience in server-side technologies
- In-depth understanding of AI/ML concepts and their relevance to application development
- Exposure to AI-related tools and libraries used in software development
- Experience in AWS and Kubernetes an added advantage
- Experience in DevOps an added advantage
- Experience in handling mission-critical services and platforms an added advantage
- Solid communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
- Passion for being a technology ambassador and coaching engineering excellence to junior engineers
- Strong understanding of the software design/architecture process

How you will lead
- Be the technology leader and demonstrate ownership of critical platform services
- Gather functional requirements, develop technical specifications, and handle project and test planning
- Be responsible for the design and architecture of the project
- Be responsible for engineering and operational excellence for the team's deliverables
- Design and develop REST services with high availability and resiliency
- Implement world-class user experiences, working closely with designers and product owners
- Act in a technical leadership capacity: mentor junior engineers and new team members, and apply technical expertise to challenging programming and design problems
- Spend roughly 80% of your time on hands-on coding
- Awareness of AI concepts and their potential application in software development
- Ability to utilize existing AI-powered tools and APIs in development tasks
- Experience with AI-assisted IDEs like Windsurf, Qodo, or Cursor is an added advantage
- Own end-to-end engineering with a quality focus and world-class engineering and operational excellence
- Take on DevOps responsibilities with an Infrastructure as Code philosophy
- Be an innovation champion: find creative ways of solving customer issues within constraints
- Work cross-functionally with various Intuit teams (product management, various product lines, or business units) to drive results
- Experience with Agile development, Scrum, or Extreme Programming methodologies
Posted 1 day ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from Kellton Tech!!

Job Title: Java & ADF Developer / Java with Spring Boot
Location: Hyderabad (Onsite, Client Location)
Experience: 5-12 years
Employment Type: Full-time / Contract (as applicable)
Joining: Immediate to 30 days preferred

About Kellton:
We are a global IT services and digital product design and development company with subsidiaries that serve startup, mid-market, and enterprise clients across diverse industries, including Finance, Healthcare, Manufacturing, Retail, Government, and Nonprofits. At Kellton, we believe that our people are our greatest asset. We are committed to fostering a culture of collaboration, innovation, and continuous learning. Our core values include integrity, customer focus, teamwork, and excellence. To learn more about our organization, please visit us at www.kellton.com

Are you craving a dynamic and autonomous work environment? If so, this opportunity may be just what you're looking for. At our company, we value your critical thinking skills and encourage your input and creative ideas to supply the best talent available. To boost your productivity, we provide a comprehensive suite of IT tools and practices backed by an experienced team.

Req 1: Java with Spring Boot
Technical Skills:
- Java (should also be able to work on older versions 7 and 8)
- Spring Boot, Spring Data JPA, Spring Security
- MySQL
- IDEs: primarily NetBeans, also Eclipse
- Jasper Reports
- Application servers: Tomcat, JBoss (WildFly)
- Basic knowledge of Linux

Day-to-Day Responsibilities:
- Handling API-related issues and bug fixes
- Developing new APIs and features as per business requirements
- Coordinating and deploying builds in UAT environments
- Collaborating with the QA and product teams to ensure smooth releases

Additional Skillset Info: Java, Spring Boot, Hibernate, JUnit, JWT, OAuth, Redis, Docker, Kafka (optional), OpenAPI standards, Jenkins/Git pipelines, etc.

Req 2: Java & Oracle ADF Developer
About the Role:
We are looking for a skilled Java and Oracle ADF developer to join our team for an on-site deployment at our client's location in Hyderabad. The ideal candidate should have a solid background in Java development, Oracle ADF, and associated tools and technologies, strong problem-solving abilities, and experience working in a Linux-based environment.

Key Responsibilities:
- Develop and maintain enterprise-grade applications using Oracle ADF and Java 7/8
- Design and implement reports using Jasper Reports and iReport
- Manage deployments and configurations on the JBoss application server
- Work with development tools such as NetBeans, Eclipse, or JDeveloper
- Perform data management tasks using MySQL
- Write and maintain shell scripts and configure cron jobs for scheduled tasks
- Administer and monitor systems in a Linux environment
- Utilize Apache Superset for data visualization and dashboard reporting
- Collaborate with cross-functional teams to deliver high-quality solutions on time
- Troubleshoot issues and provide timely resolutions
Required Skills:
- Proficiency in Java 7/8 and object-oriented programming
- Strong hands-on experience with Oracle ADF
- Expertise in Jasper Reports, iReport, and report generation
- Experience with JBoss server setup and application deployment
- Familiarity with NetBeans, Eclipse, or JDeveloper IDEs
- Good understanding of MySQL database design and queries
- Experience with Linux OS and shell scripting
- Ability to set up and manage cron jobs
- Knowledge of Apache Superset or similar BI tools
- Strong problem-solving and debugging skills

Good to Have:
- Exposure to Agile development practices
- Familiarity with REST APIs and web services
- Knowledge of version control tools (e.g., Git)

Education:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

What we offer you:
- Existing clients in multiple domains to work with
- A strong and efficient team committed to quality output
- Opportunities to enhance your knowledge and gain industry domain expertise by working in varied roles
- A team of experienced, fun, and collaborative colleagues
- A hybrid work arrangement for flexibility and work-life balance (if the client/project allows)
- Competitive base salary and job satisfaction

Join our team and become part of an exciting company where your expertise and ideas are valued, and where you can make a significant impact in the IT industry. Apply today!

Interested applicants, please submit your detailed resume stating your current and expected compensation and notice period to srahaman@kellton.com
Posted 1 day ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Designation: Solution Architect
Office Location: Gurgaon

Position Description:
As a Solution Architect, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle from solution through execution and launch, building the right team, and collaborating closely with business and product teams.

Primary Responsibilities:
- Design end-to-end solutions that meet business requirements and align with the enterprise architecture
- Define the architecture blueprint, including integration, data flow, application, and infrastructure components
- Evaluate and select appropriate technology stacks, tools, and frameworks
- Ensure proposed solutions are scalable, maintainable, and secure
- Collaborate with business and technical stakeholders to gather requirements and clarify objectives
- Act as a bridge between business problems and technology solutions
- Guide development teams during the execution phase to ensure solutions are implemented according to design
- Identify and mitigate architectural risks and issues
- Ensure compliance with architecture principles, standards, policies, and best practices
- Document architectures, designs, and implementation decisions clearly and thoroughly
- Identify opportunities for innovation and efficiency within existing and upcoming solutions
- Conduct regular performance and code reviews, and provide feedback to development team members to support their professional development
- Lead proof-of-concept initiatives to evaluate new technologies

Functional Responsibilities:
- Facilitate daily stand-up meetings, sprint planning, sprint reviews, and retrospective meetings
- Work closely with the product owner to prioritize the product backlog and ensure that user stories are well defined and ready for development
- Identify and address issues or conflicts that may impact project delivery or team morale
- Experience with Agile project management tools such as Jira and Trello

Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in software engineering, with at least 3 years in a solution architecture or technical leadership role
- Proficiency with the AWS or GCP cloud platform
- Strong implementation knowledge of the JS tech stack (NodeJS, ReactJS)
- Experience with database engines such as MySQL and PostgreSQL, with proven knowledge of database migrations and high-throughput, low-latency use cases
- Experience with key-value stores like Redis, MongoDB, and similar
- Preferred knowledge of distributed technologies such as Kafka, Spark, Trino, or similar, with proven experience in event-driven data pipelines
- Proven experience setting up big data pipelines to handle high-volume transactions and transformations
- Experience with BI tools such as Looker, PowerBI, Metabase, or similar
- Experience with data warehouses like BigQuery, Redshift, or similar
- Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC (Terraform/CloudFormation)

Good to Have:
- Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, TOGAF, etc.
- Experience setting up analytical pipelines using BI tools (Looker, PowerBI, Metabase, or similar) and low-level Python tools like Pandas, NumPy, and PyArrow (see the Pandas/PyArrow sketch after this list)
- Experience with data transformation tools like dbt, SQLMesh, or similar
- Experience with data orchestration tools like Apache Airflow, Kestra, or similar
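To make the analytics-tooling expectation above concrete, here is a minimal sketch of the kind of low-level Python transformation work the role references, using Pandas and PyArrow to aggregate raw events and persist them as Parquet. The column names and file path are illustrative assumptions, not part of the job description.

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical raw ad-events extract; in practice this would come from a
# warehouse query or an event-pipeline sink.
events = pd.DataFrame({
    "campaign_id": ["c1", "c1", "c2"],
    "event": ["click", "install", "click"],
    "cost": [0.12, 0.00, 0.09],
})

# Aggregate to a per-campaign funnel summary.
summary = (
    events.groupby(["campaign_id", "event"], as_index=False)
    .agg(event_count=("event", "size"), total_cost=("cost", "sum"))
)

# Persist as Parquet via PyArrow for downstream BI tools (Looker, Metabase, ...).
pq.write_table(pa.Table.from_pandas(summary), "campaign_summary.parquet")
```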
Work Environment Details:

About Affle:
Affle is a global technology company with a proprietary consumer intelligence platform that delivers consumer engagement, acquisitions, and transactions through relevant mobile advertising. The platform aims to enhance returns on marketing investment through contextual mobile ads and by reducing digital ad fraud. While Affle's Consumer platform is used by online and offline companies for measurable mobile advertising, its Enterprise platform helps offline companies go online through platform-based app development, enablement of O2O commerce, and its customer data platform. Affle India successfully completed its IPO in India on 08 Aug 2019 and now trades on the stock exchanges (BSE: 542752 & NSE: AFFLE). Affle Holdings is the Singapore-based promoter of Affle India, and its investors include Microsoft and Bennett Coleman & Company (BCCL), among others. For more details: www.affle.com

About BU:
Ultra: access deals, coupons, and walled-garden-based user acquisition on a single platform to offer bottom-funnel optimization across multiple inventory sources. For more details, please visit: https://www.ultraplatform.io/
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Overview:
- Technical Skills: Proficiency in Java, Spring Boot, microservices, REST APIs, Kafka, Kibana, O MELT, and Azure Cloud
- Domain Knowledge: Expertise in warehouse execution systems, warehouse control systems, and material handling domains (good to have)
- Orchestration Tools: Experience with Camunda orchestration and delegate writing (good to have)
- AI Tools: Strong proficiency in AI tools for exponential productivity
- Stakeholder Management: Excellent stakeholder management skills, including client communication and relationship building
- Team Leadership: Ability to lead and mentor a team, fostering a positive work environment
- Methodologies: Proficient in Agile/Scrum methodologies
- Tools and Technologies: Proficiency in Git/Bitbucket, CI/CD pipelines, and other relevant tools

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
- Proven ability to meet deadlines and deliver high-quality results
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases
- Process data with Spark, Python, PySpark, and Hive, with HBase or other NoSQL databases, on the Azure Cloud Data Platform or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
- Develop streaming pipelines (see the PySpark streaming sketch after this posting)
- Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services

Preferred Education
Master's Degree

Required Technical and Professional Expertise
- 6-7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering skills
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on Azure
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server
- Good to excellent SQL skills

Preferred Technical and Professional Experience
- Certification in Azure and Databricks, or Cloudera Spark certification
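As a hedged illustration of the streaming-pipeline skill this posting asks for, the sketch below reads a Kafka topic with PySpark Structured Streaming and writes the parsed records to Parquet. The broker address, topic name, schema, and paths are placeholder assumptions, not details from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Placeholder schema for the incoming JSON events.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("ts", TimestampType()),
])

# Read from a hypothetical Kafka topic; broker and topic are assumptions.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; decode the value column and unpack the JSON fields.
parsed = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

# Stream the parsed records to Parquet with checkpointing for fault tolerance.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/events")
    .option("checkpointLocation", "/chk/events")
    .start()
)
query.awaitTermination()
```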
Posted 1 day ago
5.0 - 8.0 years
14 - 20 Lacs
Pune
Work from Office
End-to-end application development; technical analysis of requirements and leading the project; providing technical solutions; interaction with customers and stakeholders.
Keywords: application development, Microsoft technology, technical analysis, lead project
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer at IBM, you'll play a vital role in development and application design, and provide regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements
- Strive for continuous improvement by testing the built solution and working under an agile framework
- Discover and implement the latest technology trends to maximize and build creative solutions

Preferred Education
Master's Degree

Required Technical and Professional Expertise
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing (see the sketch after this posting)
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including their cloud storage systems

Preferred Technical and Professional Experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering
- Good to have: experience with detection and prevention tools for company products and platforms, and customer-facing work
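To ground the PySpark and SQL expectations above, here is a minimal batch-ETL sketch that cleans a raw extract, aggregates it with Spark SQL, and writes partitioned Parquet. The input path, column names, and output location are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical input: a CSV extract of orders; path and columns are assumptions.
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Typical transformation step: cast types, derive a date column, filter bad rows.
clean = (
    orders.withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("amount") > 0)
)

# Register a view so the aggregate can be expressed as plain SQL.
clean.createOrReplaceTempView("orders")
daily = spark.sql(
    "SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue "
    "FROM orders GROUP BY order_date"
)

# Write partitioned Parquet for downstream warehousing.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily")
```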
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Overview:
This person will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.

You'll be responsible for:
- Creating and maintaining optimal data pipeline architecture, and assembling large, complex data sets that meet functional and non-functional business requirements
- Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies
- Building analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Creating data tools for the analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
- Working with data and analytics experts to strive for greater functionality in our data systems

You'd have:
We are looking for a candidate with 3+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, with experience using the following software and tools:
- Data pipeline and workflow management tools: Apache Airflow, NiFi, Talend, etc. (see the Airflow sketch after this posting)
- Relational SQL and NoSQL databases, including ClickHouse, Postgres, and MySQL
- Stream-processing systems: Storm, Spark Streaming, Kafka, etc.
- Object-oriented and functional scripting languages: Python, Scala, etc.
- Experience building and optimizing data pipelines, architectures, and data sets
- Advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases
- Strong analytic skills related to working with unstructured datasets
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
- Working knowledge of message queuing, stream processing, and highly scalable data stores

Why join us?
- Impactful work: play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry
- Tremendous growth opportunities: be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development
- Innovative environment: work alongside a world-class team in a challenging and fun environment where innovation is celebrated

Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
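Since the role leads with workflow-management tools like Apache Airflow, here is a minimal, hedged sketch of an Airflow 2.x-style DAG wiring an extract step ahead of a transform step. The DAG id, schedule, and task bodies are illustrative assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    # Placeholder: pull a day's worth of events from a source system.
    print("extracting events...")


def transform() -> None:
    # Placeholder: clean and aggregate the extracted events.
    print("transforming events...")


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run transform only after extract succeeds.
    extract_task >> transform_task
```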
Posted 1 day ago
0 years
0 Lacs
India
Remote
Test Strategy & Plan
- Map user stories (DPR, Mess QR, Leave, Incident, Material Req, HR) to acceptance criteria
- Define the "shift-left" gates: PR smoke ➝ nightly regression ➝ pre-release performance ➝ post-deploy canary

Automated Functional Testing
- Build PyTest-BDD suites against the ERP's REST UI and bot chat flows
- Mock the WhatsApp Cloud API via WireMock; replay Meta payloads in CI

API Contract & Schema Tests
- Use Pact (Python) for Bot ⇄ Worker message schemas; enforce in GitHub Actions
- Validate EventBridge / Kafka events against Confluent Schema Registry

Performance & Load
- k6 scripts for the Bot API (5k RPS) and Kafka throughput
- Locust browser flows (100 concurrent users) against ERP Desk (see the Locust sketch after this posting)
- Publish SLO dashboards in Grafana

Master Test Orchestrator
- Write a single Makefile / tox entry such that make ci spins up an ephemeral EKS namespace ➝ seeds test data ➝ runs the full suite ➝ tears down
- Integrate with a GitHub Actions reusable workflow

Chaos & DR Drills
- Inject RDS fail-over, MSK broker stop, and Redis node kill; assert that tests still pass or alert

SOP & Documentation
- Produce a "QA Run-book" wiki: pipeline diagrams, how to add a test, how to debug failures
- Checklist for each release: tag, run pipeline, sign-off matrix

Knowledge Transfer
- Conduct two 2-hour remote workshops: "Writing a new test in 15 minutes" and "Reading Grafana QA dashboards"

Deliverables & Timeline
- Zero critical regressions escape to production during the 90-day window
- < 15 min PR gate; nightly regression under 45 min
- < 2 h / month maintenance on test scripts (measured by Jira log)
- All new features merge only when the master pipeline is green

Must-Have Skills
- Advanced Python test automation (PyTest, fixtures, mocks)
- Deep knowledge of CI pipelines on GitHub Actions
- Load testing with k6 / Locust and interpreting results
- Experience testing microservice event systems (Kafka, SQS, EventBridge)
- Clear technical writing: can turn pipelines into beginner-friendly SOPs
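To ground the Locust expectation above, here is a minimal sketch of the kind of browser-flow simulation described (100 concurrent users against ERP Desk). The host, endpoint paths, and payloads are hypothetical, not taken from the engagement.

```python
from locust import HttpUser, between, task


class ErpDeskUser(HttpUser):
    """Simulates one ERP Desk user; run with: locust -f this_file.py --users 100"""

    wait_time = between(1, 3)  # think time between actions, in seconds

    @task(3)
    def view_dashboard(self) -> None:
        # Hypothetical read-heavy flow, weighted 3x.
        self.client.get("/api/dashboard")

    @task(1)
    def submit_leave_request(self) -> None:
        # Hypothetical payload mirroring the Leave user story.
        self.client.post("/api/leave", json={"days": 1, "reason": "sick"})
```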
Posted 1 day ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases
- Process data with Spark, Python, PySpark, and Hive, with HBase or other NoSQL databases, on the Azure Cloud Data Platform or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform
- Develop streaming pipelines
- Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services

Preferred Education
Master's Degree

Required Technical and Professional Expertise
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on Azure
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred Technical and Professional Experience
- Certification in Azure and Databricks, or Cloudera Spark certification
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Andhra Pradesh, India
On-site
- 8-10 years of experience in Pega and 6-8 years of experience in Pega CDH
- The candidate will act as a CDH SME from the Pega Platform perspective, preferably in banking and financial services
- Pega CSSA and Pega Decisioning Consultant certifications
- Experience in Pega CDH 8.x or higher versions and the NBAD framework
- Proven track record of delivering hyper-personalized solutions using NBA strategies across inbound and outbound channels in both batch and real-time mode
- Hands-on experience with Pega CDH components like ADM, DSM, Value Finder, Customer Profiler, etc.
- Proficiency in SQL, JSON, real-time decisioning, and object-oriented programming principles
- Ability to translate business requirements into decisioning logic, including Next-Best-Action (NBA) strategies, engagement policies, and offer flows
- Good understanding of and experience in digital applications and business functions
- Strong analytical skills with the ability to summarize data into actionable insights
- Excellent communication skills to interact with both technical and non-technical stakeholders
- Designing in alignment with the client's business objectives

Good to have skills:
- Operations Manager
- Knowledge of externalization of Cassandra, Kafka, and Elasticsearch
- Familiarity with DevOps practices such as CI/CD pipelines for deployment automation
Posted 1 day ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Requirements

Description and Requirements

Position Summary:
A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.

Job Responsibilities:
- Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex big data clusters
- Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application releases, cluster changes, and compliance
- Identifies and resolves issues utilizing structured tools and techniques
- Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance
- Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards
- Implements industry best practices while performing Hadoop cluster administration tasks
- Works in an Agile model with a strong understanding of Agile concepts
- Collaborates with development teams to provide and implement new features
- Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic
- Addresses organizational obstacles to enhance processes and workflows
- Adopts and learns new technologies based on demand, and supports team members by coaching and assisting

Education:
Bachelor's degree in Computer Science, Information Systems, or another related field, with 14+ years of IT and infrastructure engineering work experience

Experience:
14+ years of total IT experience and 10+ years of relevant experience in big data database technologies

Technical Skills:
- Big Data Platform Management: expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM Big SQL
- Data Infrastructure & Security: proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos
- Performance Tuning & Optimization: skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency
- Backup & Recovery: experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity
- Linux & Troubleshooting: strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams
- DevOps & Scripting: proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations; experienced in infrastructure-as-code practices and observability tools such as Elastic
- Agile & Collaboration: strong understanding of Agile SAFe for teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams
- ITSM Process & Tools: knowledgeable in ITSM processes and tools such as ServiceNow

Other Critical Requirements:
- Automation and Scripting: proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency
- Analytical and Problem-Solving Skills: strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment
- 24x7 Support: ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability
- Team Management and Leadership: proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills
- Communication Skills: exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels
- Stakeholder Management: prior experience effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams
- Business Presentations: skilled in creating and delivering impactful business presentations to communicate key insights and recommendations
- Collaboration and Independence: demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
Posted 1 day ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Description
AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector.

Excited by using massive amounts of data to develop machine learning (ML) and deep learning (DL) models? Want to help the largest global enterprises derive business value through the adoption of artificial intelligence (AI)? Eager to learn from many different enterprises' use cases of AWS ML and DL? Thrilled to be a key part of Amazon, which has been investing in machine learning for decades, pioneering and shaping the world's AI technology?

At AWS ProServe India LLP ("ProServe India"), we are helping large enterprises build ML and DL models on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. Our Professional Services organization works together with our internal customers to address the business needs of AWS customers using AI. AWS Professional Services is a unique consulting team in ProServe India. We pride ourselves on being customer obsessed and highly focused on the AI enablement of our customers. If you have experience with AI, including building ML or DL models, we'd like to have you join our team. You will get to work with an innovative company, with great teammates, and have a lot of fun helping our customers. If you do not live in a market where we have an open Data Scientist position, please feel free to apply. Our Data Scientists can live in any location where we have a Professional Services office.

Key job responsibilities
A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. It will be a person who likes to have fun, loves to learn, and wants to innovate in the world of AI. Major responsibilities include:
- Understand the internal customer's business need and guide them to a solution using our AWS AI services, AI platforms, AI frameworks, and AI EC2 instances
- Assist internal customers by delivering an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization
- Use deep learning frameworks like MXNet, Caffe2, TensorFlow, Theano, CNTK, and Keras to help our internal customers build DL models
- Use SparkML and Amazon Machine Learning (AML) to help our internal customers build ML models
- Work with our Professional Services big data consultants to analyze, extract, normalize, and label relevant data
- Work with our Professional Services DevOps consultants to help our internal customers operationalize models after they are built
- Assist internal customers with identifying model drift and retraining models
- Research and implement novel ML and DL approaches, including using FPGAs

This role is open for Mumbai/Pune/Bangalore/Chennai/Hyderabad/Delhi.

About the Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS?
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Basic Qualifications
- 7+ years of professional or military experience, including a Bachelor's degree
- 7+ years managing complex, large-scale projects with internal or external customers
- Ability to deliver an ML/DL project from beginning to end: understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization
- Skilled in using deep learning frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, Amazon Machine Learning) to build models for internal customers (a hedged Keras sketch follows this posting)

Preferred Qualifications
- 7+ years of IT platform implementation experience in a technical and analytical role
- Experience in consulting, design, and implementation of serverless distributed solutions
- Experience with databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects
- Experience as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent), systems, networks, and operating systems

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: AWS ProServe IN - Karnataka
Job ID: A3009199
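As a hedged illustration of the deep-learning framework skills listed above, here is a minimal Keras (TensorFlow) sketch that defines, compiles, and trains a small binary classifier on synthetic data. The architecture and data are illustrative assumptions, not an AWS-prescribed workflow.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 1,000 samples with 20 features, binary labels.
rng = np.random.default_rng(seed=0)
x = rng.normal(size=(1000, 20)).astype("float32")
y = (x.sum(axis=1) > 0).astype("float32")

# A small fully connected binary classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train with a held-out validation split to watch for overfitting.
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2)
```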
Posted 1 day ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Are you ready to make your mark with a true industry disruptor? ZineOne, a subsidiary of Session AI, the pioneer of in-session marketing, is looking to add talented team members to help us grow into the premier revenue tool for e-commerce. We work with some of the leading brands nationwide, and we innovate how brands connect with and convert customers.

Job Description
This position offers a hands-on, technical opportunity as a vital member of the Site Reliability Engineering group. Our SRE team is dedicated to ensuring that our cloud platform operates seamlessly, efficiently, and reliably at scale. The ideal candidate will bring over five years of experience managing cloud-based big data solutions, with a strong commitment to resolving operational challenges through automation and sophisticated software tools. Candidates must uphold a high standard of excellence and possess robust communication skills, both written and verbal. A strong customer focus and deep technical expertise in areas such as Linux, automation, application performance, databases, load balancers, networks, and storage systems are essential.

Key Responsibilities:
As a Session AI SRE, you will:
- Design and implement solutions that enhance the availability, performance, and stability of our systems, services, and products
- Develop, automate, and maintain infrastructure as code for provisioning environments in AWS, Azure, and GCP (see the boto3 sketch below)
- Deploy modern automated solutions that enable automatic scaling of the core platform and features in the cloud
- Apply cybersecurity best practices to safeguard our production infrastructure
- Collaborate on DevOps automation, continuous integration, test automation, and continuous delivery for the Session AI platform and its new features
- Manage data engineering tasks to ensure accurate and efficient data integration into our platform and outbound systems
- Utilize expertise in DevOps best practices, shell scripting, Python, Java, and other programming languages, while continually exploring new technologies for automation solutions
- Design and implement monitoring tools for service health, including fault detection, alerting, and recovery systems
- Oversee business continuity and disaster recovery operations
- Create and maintain operational documentation, focusing on reducing operational costs and enhancing procedures
- Demonstrate a continuous-learning attitude with a commitment to exploring emerging technologies

Preferred Skills:
- Experience with cloud platforms like AWS, Azure, and GCP, including their management consoles and CLIs
- Proficiency in building and maintaining infrastructure on:
  - AWS, using services such as EC2, S3, ELB, VPC, CloudFront, Glue, and Athena
  - Azure, using services such as Azure VMs, Blob Storage, Azure Functions, Virtual Networks, Azure Active Directory, and Azure SQL Database
  - GCP, using services such as Compute Engine, Cloud Storage, Cloud Functions, VPC, Cloud IAM, and BigQuery
- Expertise in Linux system administration and performance tuning
- Strong programming skills in Python, Bash, and NodeJS
- In-depth knowledge of container technologies like Docker and Kubernetes
- Experience with real-time big data platforms, including architectures like HDFS/HBase, ZooKeeper, and Kafka
- Familiarity with central logging systems such as ELK (Elasticsearch, Logstash, Kibana)
- Competence in implementing monitoring solutions using tools like Grafana, Telegraf, and InfluxDB
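To make the AWS automation expectation concrete, here is a minimal, hedged boto3 sketch that inventories running EC2 instances and uploads a report to S3. The region, filter, and bucket name are illustrative assumptions.

```python
import boto3

# Assumed region and bucket; adjust for a real environment.
ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Collect IDs of all running instances, paginating through results.
instance_ids = []
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        instance_ids.extend(i["InstanceId"] for i in reservation["Instances"])

# Upload a simple inventory report; the bucket name is hypothetical.
s3.put_object(
    Bucket="example-sre-reports",
    Key="running-instances.txt",
    Body="\n".join(instance_ids).encode(),
)
print(f"reported {len(instance_ids)} running instances")
```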
Benefits:
- Competitive salary package and stock options
- Opportunity for continuous learning
- Fully sponsored EAP services
- Excellent work culture
- Opportunity to be an integral part of our growth story and grow with our company
- Health insurance for employees and dependents
- Flexible work hours
- Remote-friendly company
Posted 1 day ago
6.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Company Description
GfK is seeking a Middleware Engineer with hands-on Java and Python experience and proven analytical and problem-solving skills. The ideal candidate will be responsible for the configuration, deployment, and management of middleware systems to support enterprise applications. This role involves working closely with development, operations, and infrastructure teams to ensure the seamless integration of applications and systems while optimizing performance and reliability.

Job Description
- Install, configure, and maintain middleware technologies (experience with any of these: WebSphere, WebLogic, Tomcat, JBoss, Kafka, RabbitMQ, or similar; see the Kafka sketch at the end of this posting)
- Ensure high availability, scalability, and reliability of middleware systems
- Design and implement solutions for system and application integration
- Automate routine tasks, processes, and legacy data fusion
- Optimize middleware performance and recommend improvements
- Design and develop middleware components
- Design and implement the APIs necessary for integration and data consumption
- Work independently and collaboratively on a multi-disciplined project team in an Agile development environment
- Be actively involved in the design, development, and testing activities for a big data product
- Provide feedback to development teams on code/architecture optimization

Qualifications
Education: Bachelor of Science degree from an accredited university

Required Skills and Experience:
- 6+ years of hands-on experience developing with Java, Spring, and Python
- Hands-on experience with the Spring Tool Suite, including Spring Boot, Spring Boot OAuth, Spring Security, Spring Data JPA, and Spring Batch
- Understanding of relational databases such as Oracle, SQL Server, MySQL, Postgres, or similar
- Fluency in Java/J2EE, JSP, and web services; must have experience with Java 8 or higher
- Experience with JMS, Kafka, IBM MQ, or similar
- Experience using software project tracking tools such as Jira
- Familiarity with Azure services
- Proven experience with CI/CD
- Proven experience with Jenkins, Ansible, Docker, and Kubernetes
- Proven experience with version control (GitHub, Bitbucket)
- Familiarity with Linux OS and concepts
- Strong written and verbal communication skills
- Self-motivated, with the ability to work well in a team

Additional Information
- Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms
- Recharge and revitalize with the help of wellness plans made for you and your family
- Plan your future with financial wellness tools
- Stay relevant and upskill yourself with career development opportunities

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
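Since the role centers on middleware integration with brokers like Kafka, here is a minimal, hedged Python sketch (using the confluent-kafka client) that publishes a message and confirms delivery. The broker address and topic are illustrative assumptions.

```python
from confluent_kafka import Producer

# Assumed local broker; in a real deployment this would point at the cluster.
producer = Producer({"bootstrap.servers": "localhost:9092"})


def on_delivery(err, msg) -> None:
    # Called once per message after the broker acknowledges (or rejects) it.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [partition {msg.partition()}]")


# Publish a test event to a hypothetical integration topic.
producer.produce("integration-events", value=b'{"status": "ok"}', callback=on_delivery)
producer.flush()  # block until all queued messages are delivered
```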
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
THIS IS A LONG-TERM [12+ MONTHS] CONTRACT POSITION WITH ONE OF THE LARGEST GLOBAL TECHNOLOGY LEADERS.

The candidate must be present in the Pune office whenever there are team meetings, which happen roughly once a month.

Our large Fortune-listed client is ranked as one of the best companies to work with in the world. The client fosters a progressive culture, creativity, and a flexible work environment. They use cutting-edge technologies to keep themselves ahead of the curve. Diversity in all aspects is respected. Integrity, experience, honesty, people, humanity, and passion for excellence are some other adjectives that define this global technology leader.

Minimum Qualifications:
- 5+ years of strong expertise in Python and frameworks like Django, Flask, or FastAPI (see the FastAPI sketch after this posting)
- Solid understanding of object-oriented design, data structures, and algorithms
- Proficiency with relational and NoSQL databases (e.g., PostgreSQL, MongoDB)
- Hands-on experience with Docker, Kubernetes, and CI/CD pipelines
- Knowledge of message queues (e.g., RabbitMQ, Kafka) and event-driven architecture
- Excellent problem-solving and debugging skills
- Strong communication and leadership abilities

Additional Qualifications:
- Experience with distributed systems or real-time processing
- Exposure to frontend technologies (React, Angular) is a plus

Key Responsibilities:
- Design, develop, and maintain scalable and high-performance backend systems using Python
- Architect and implement RESTful APIs and integrate third-party services
- Lead and mentor a team of developers, ensuring code quality through reviews and best practices
- Work closely with DevOps to deploy and maintain cloud infrastructure (AWS, GCP, or Azure)
- Collaborate with product managers, designers, and QA engineers to deliver high-quality products
- Optimize application performance and scalability
- Ensure adherence to secure coding standards and compliance requirements
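As a hedged sketch of the REST-API work this role involves, here is a minimal FastAPI service (Pydantic v2 style) with one typed resource. The resource model, routes, and in-memory store are illustrative assumptions.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service-sketch")


class Order(BaseModel):
    sku: str
    quantity: int


# In-memory store standing in for a real database.
ORDERS: dict[int, Order] = {}


@app.post("/orders/{order_id}")
def create_order(order_id: int, order: Order) -> dict:
    if order_id in ORDERS:
        raise HTTPException(status_code=409, detail="order already exists")
    ORDERS[order_id] = order
    return {"order_id": order_id, **order.model_dump()}


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

# Run locally with: uvicorn this_module:app --reload
```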
Posted 1 day ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Panasonic Avionics Corporation, a leading provider of in-flight entertainment and communication solutions, is seeking a dynamic and experienced Architect to join its team in Pune.
Experience: 10-17 years
Work Mode: Onsite - work from office only
Location: Pune
About the Role
As a Principal Engineer, you will:
Design and implement scalable, high-performance systems for digital platforms.
Lead the development of Android applications, middleware services, and AWS cloud solutions.
Architect low-latency networking systems and secure communication protocols for IoT/enterprise use cases.
Harness big data, machine learning, and edge computing to enable real-time decision-making.
Build RESTful APIs, optimize CI/CD pipelines, and manage infrastructure using AWS CloudFormation.
Collaborate with clients and cross-functional teams to deliver tailored, innovative solutions.
Key Skills We’re Looking For
15+ years of experience in web/mobile development, middleware design, and AWS cloud.
Expertise in C/C++, Java, Python, Kotlin, and networking protocols (TCP/IP).
Proficiency in big data tools (Spark, Kafka), ML frameworks, and edge computing.
Hands-on experience with CI/CD (GitLab), monitoring tools (CloudWatch, Datadog), and Agile methodologies.
Strong leadership, communication, and client engagement skills.
Education & Preferences
Bachelor’s degree (required) in Computer Science or a related field; Master’s preferred.
Familiarity with multimedia streaming, IoT, or Agile/Scrum is a plus.
Interested candidates, share your updated profile at Sam.Thilak@antal.com
Posted 1 day ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Sprinklr is a leading enterprise software company for all customer-facing functions. With advanced AI, Sprinklr's unified customer experience management (Unified-CXM) platform helps companies deliver human experiences to every customer, every time, across any modern channel. Headquartered in New York City with employees around the world, Sprinklr works with more than 1,000 of the world’s most valuable enterprises - global brands like Microsoft, P&G, Samsung, and more than 50% of the Fortune 100.
What Does Success Look Like?
We are looking for a Senior Engineering Manager to lead and scale a team of high-caliber backend and platform engineers building distributed systems that power our mission-critical CCaaS product. As a technical leader and people manager, you’ll be responsible for both technical excellence and organizational health - driving architecture, execution, and team growth in a fast-paced, product-led SaaS environment. This is a high-impact role for someone who thrives on solving complex engineering problems at scale, enabling a team to operate at their peak, and building platforms that directly drive business outcomes.
Seniority Level: Senior Manager / Hands-on People & Technical Leader
Reports to: Director of Engineering or VP Engineering
Team Size: 8-15 engineers (leads + ICs)
Technology Stack: Java, Spring Boot, Kafka, Redis, MongoDB, Postgres, Kubernetes, AWS
What You’ll Do:
Technical Leadership:
Lead the design and delivery of scalable, distributed backend systems and real-time platform APIs using Java-based microservices.
Partner with architects and tech leads to establish the technical vision, system design, and long-term architecture.
Drive engineering excellence through code reviews, design reviews, observability, performance tuning, and SLAs.
Own end-to-end system reliability, scalability, cost, and maintainability.
People Management:
Manage, coach, and grow a team of backend engineers across levels.
Drive career development, technical mentoring, and regular 1:1s.
Foster a high-performance, inclusive culture grounded in ownership, autonomy, and accountability.
Recruit and onboard exceptional engineering talent; collaborate with Talent Acquisition and the interview panel on hiring strategy.
Execution & Delivery:
Drive sprint planning, estimation, and delivery across multiple squads or initiatives.
Partner with Product and Program Managers to align engineering execution with business goals.
Set and monitor engineering OKRs, team velocity, and project health metrics.
Proactively identify tech debt, risks, and improvement areas.
Cross-Functional Collaboration:
Work closely with Product, DevOps, QA, and Customer Support teams to ensure end-to-end solution delivery.
Represent Engineering in roadmap planning, executive reviews, and customer-facing discussions (when needed).
What Makes You Qualified?
8 to 12 years of total experience, with at least 2 years in engineering leadership roles.
Deep experience designing, building, and operating Java-based microservices in cloud-native, distributed environments.
Strong understanding of backend architectural patterns.
Proven track record of building and scaling high-performing engineering teams.
Experience with Kafka, Redis, MongoDB/PostgreSQL, Spring Boot, Kubernetes, REST APIs, and CI/CD pipelines.
Strong communication and stakeholder management skills.
Posted 1 day ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
🚀 Job Opening: Full Stack Developer
📍 Location: Mumbai, India
📩 Apply Now: hr@victreesolutions.com
#FullStackDeveloper #Java #ReactJS #Microservices #Kafka #Angular #Docker #SpringBoot #HiringNow
Role Overview:
We are looking for a passionate and skilled Full Stack Developer to join our dynamic development team. You will be responsible for building scalable, robust, and dynamic front-end and back-end solutions, primarily using Java and modern JavaScript frameworks.
Key Responsibilities:
Analyze requirements and perform impact analysis.
Design and develop dynamic front-end and back-end applications.
Collaborate with product managers and cross-functional teams.
Prepare software releases and maintain continuous improvements.
Stay up to date with emerging tools, frameworks, and tech practices.
Core Technical Skills Required:
Proficiency in Java 8, Spring Boot, and microservices architecture
Experience with Kafka and gRPC
Strong understanding of REST APIs and HTML/CSS/JavaScript
Solid hands-on experience with ReactJS (incl. React Hooks) and NPM
Familiarity with Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP)
Experience with SQL and NoSQL databases
Nice to Have:
Knowledge of WebSockets, Java threads, ExecutorService, and Lightstreamer
Experience with Node.js, Maven, Git, and design patterns
What We Offer:
Opportunity to work on cutting-edge tech in a niche industry
Collaborative and inclusive team environment
Fast-paced, learning-focused culture
If you're passionate about full stack development and want to work on impactful enterprise software solutions, we'd love to hear from you!
📩 Send your resume to: hr@victreesolutions.com
🕒 Apply ASAP – limited positions available!
Posted 1 day ago
0 years
0 Lacs
Raipur, Chhattisgarh, India
On-site
Role Summary
We are seeking a highly motivated and skilled Data Engineer to join our data and analytics team. This role is ideal for someone with strong experience in building scalable data pipelines, working with modern lakehouse architectures, and deploying data solutions on Microsoft Azure. You’ll be instrumental in developing, orchestrating, and maintaining our real-time and batch data infrastructure using tools like Apache Spark, Apache Kafka, Apache Airflow, Azure Data Services, and modern DevOps practices.
Key Responsibilities
Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark.
Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses.
Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam.
Build and schedule jobs using orchestration tools like Apache Airflow or Dagster.
Perform data modeling using the Kimball methodology to build dimensional models in Snowflake or other data warehouses.
Implement data versioning and transformation using dbt and Apache Iceberg or Delta Lake.
Manage data cataloging and lineage using tools like Marquez or Collibra.
Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes.
Set up and maintain monitoring and alerting with Prometheus and Grafana for performance and reliability.
Required Skills & Qualifications
Programming & Scripting: Proficiency in Python, with strong knowledge of OOP and data structures and algorithms. Comfortable working in Linux environments for development and deployment.
Database Technologies: Strong command of SQL and an understanding of relational (DBMS) and NoSQL databases.
Big Data & Real-Time Processing: Solid experience with Apache Spark (PySpark/Scala). Familiarity with real-time processing tools like Kafka, Flink, or Beam.
Orchestration & Scheduling: Hands-on experience with Airflow, Dagster, or similar orchestration tools.
Cloud Platform: Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, and Azure Functions. AZ-900 or other Azure certifications are a plus.
Lakehouse & Warehousing: Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake. Understanding of modern lakehouse architecture and related best practices.
Data Cataloging & Governance: Familiarity with Marquez, Collibra, or other cataloging tools.
DevOps & CI/CD: Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
Monitoring & Logging: Proficiency in setting up dashboards and alerts with Prometheus and Grafana.
Note: Immediate joiners will be preferred.
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role - Java Developer
Experience - 3-5 yrs
Location - Bangalore
Backend
● Bachelor's/Master's in Computer Science from a reputed institute/university
● 3-7 years of strong experience building Java/Golang/Python based server-side solutions
● Strong in data structures, algorithms, and software design
● Experience in designing and building RESTful microservices
● Experience with server-side frameworks such as JPA (Hibernate/Spring Data), Spring, Vert.x, Spring Boot, Redis, Kafka, Lucene/Solr/Elasticsearch, etc.
● Experience in data modeling and design, and database query tuning
● Experience in MySQL and a strong understanding of relational databases
● Comfortable with agile, iterative development practices
● Excellent communication (verbal and written), interpersonal, and leadership skills
● Previous experience as part of a start-up or a product company
● Experience with AWS technologies would be a plus
● Experience with reactive programming frameworks would be a plus
● Contributions to open source are a plus
● Familiarity with deployment architecture principles and prior experience with container orchestration platforms, particularly Kubernetes, would be a significant advantage
Posted 1 day ago
5.0 - 10.0 years
20 - 27 Lacs
Bengaluru
Work from Office
We are hiring for one of our clients for an Automation QA role at the Bangalore - Marathahalli location.
Location: Bangalore - Marathahalli (Hybrid)
Experience: 5-9 Years
Budget: 27 LPA
Mandatory Skills: Python, AWS, any automation framework, Selenium, Kafka (knowledge), AI/ML (Gen AI, etc.)
Technical Skills:
Proven experience in building automation frameworks for both frontend and backend systems.
Hands-on experience with AWS services such as EC2, S3, and Lambda, as well as Kafka.
Strong understanding of Kafka, including producing and consuming messages for test automation.
Hands-on experience with AI/ML tools for automation, such as Testim, Mabl, Functionize, or custom AI models (e.g., for NLP-based test generation or failure prediction).
Prior experience in setting up AI-based test data generation.
Experience with the Python programming language.
Familiarity with test automation tools and frameworks like Robot Framework, Playwright, Selenium, or similar.
Strong understanding of REST APIs and API testing tools like Postman, Rest Assured, etc.
Experience with CI/CD pipelines and tools such as Jenkins, GitLab, etc.
Exposure to graph databases, MongoDB, and Cassandra.
Knowledge of Docker/Kubernetes is a plus.
Excellent problem-solving skills and attention to detail.
Experience working in Agile projects.
Strong communication and collaboration skills to work effectively in a team environment.
Essential Skills:
1. Building automation frameworks for both frontend and backend systems
2. Python programming
3. AWS services such as EC2, S3, Lambda
4. Strong understanding of Kafka
5. AI/ML tools for automation, such as Testim, Mabl, Functionize
Good to Have:
1. Experience with CI/CD pipelines
2. Test automation tools and frameworks like Robot Framework, Playwright, Selenium, or similar
3. Knowledge of Docker/Kubernetes
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
LEAD DATA ENGINEER
Location: Hyderabad
Role: Permanent
Mode: WFO
Job Responsibilities:
Tracks the various machine learning projects and their data needs.
Tracks and improves the Kanban process for product maintenance.
Drives complex technical discussions both within the company and with outside data partners.
Actively contributes to the design of machine learning solutions through a deep understanding of how the data is used and how new sources of data can be introduced.
Advocates for investments in tools and technologies to streamline data workflows and reduce technical debt.
Continuously explores and adopts emerging technologies and methodologies in data engineering and machine learning.
Develops and maintains scalable data pipelines to support machine learning models and analytics.
Collaborates with data scientists to ensure efficient data processing and model deployment.
Ensures data quality, integrity, and security across all stages of the data pipeline.
Implements monitoring and alerting systems to detect anomalies in data processing and model performance.
Enhances data versioning, data lineage, and reproducibility practices to improve model transparency and auditing.
Qualifications:
5+ years of experience in data engineering or related fields, with a strong focus on building scalable data pipelines to support machine learning workflows.
Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or another relevant field.
Specific experience with Kafka is required; Snowflake and Databricks would be a huge plus.
Proven expertise in designing, implementing, and maintaining large-scale, high-performance data architectures and ETL processes handling 1 TB a day.
Strong knowledge of database management systems (SQL and NoSQL), distributed data processing (e.g., Hadoop, Spark), and cloud platforms (AWS, GCP, Azure).
Experience working closely with data scientists and machine learning engineers to optimize data flows for model training and real-time inference with latency requirements.
Hands-on experience with data wrangling, preprocessing, and feature engineering to ensure clean, high-quality data for machine learning models.
Solid understanding of data governance, security protocols, and compliance requirements (e.g., GDPR, HIPAA) to ensure data privacy and integrity.
Preferred:
Experience in data pipelines and analytics for video game development.
Experience in the advertising industry.
Experience in online businesses where transactions happen without human intervention.
Posted 1 day ago
Kafka, a popular distributed streaming platform, has gained significant traction in the tech industry in recent years. Job opportunities for Kafka professionals in India have been on the rise, with many companies looking to leverage Kafka for real-time data processing and analytics. If you are a job seeker interested in Kafka roles, here is a comprehensive guide to help you navigate the job market in India.
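Most of the roles above expect hands-on familiarity with Kafka's client APIs, and being able to sketch one from memory is a common interview ask. Below is a minimal sketch of a Java producer using the standard kafka-clients library; the broker address, topic name, key, and payload are hypothetical placeholders, and a production setup would add serialization schemas, retries, acks configuration, and security settings.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class OrderEventProducer {
    public static void main(String[] args) {
        // Assumed local broker; real clusters list several bootstrap servers.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources closes the producer and flushes pending records.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name, key, and JSON payload are placeholders for illustration.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"amount\":1999}"));
        }
    }
}
```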
Kafka roles are concentrated in India's major tech hubs, cities known for their thriving IT industries and sustained demand for Kafka professionals.
The average salary range for Kafka professionals in India varies based on experience levels. Entry-level positions may start at around INR 6-8 lakhs per annum, while experienced professionals can earn between INR 12-20 lakhs per annum.
Career progression in Kafka typically follows a path from Junior Developer to Senior Developer, and then to a Tech Lead role. As you gain more experience and expertise in Kafka, you may also explore roles such as Kafka Architect or Kafka Consultant.
In addition to Kafka expertise, employers often look for professionals with skills in:
- Apache Spark
- Apache Flink
- Hadoop
- Java/Scala programming
- Data engineering and data architecture
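Since Java/Scala programming tops that list, it also helps to know the consuming side of the API. Here is a minimal Java consumer counterpart to the producer sketch above, under the same assumptions; the group id and topic are hypothetical, and a real service would poll in a loop with proper offset-commit and error handling.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class OrderEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "demo-group");              // hypothetical consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            // A real service polls in a loop; a single poll keeps the sketch short.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
            }
        }
    }
}
```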
As you explore Kafka job opportunities in India, remember to showcase your expertise in Kafka and related skills during interviews. Prepare thoroughly, demonstrate your knowledge confidently, and stay updated with the latest trends in Kafka to excel in your career as a Kafka professional. Good luck with your job search!