Talent Velocity

4 job openings at Talent Velocity

Big Data Architect | Pune, Maharashtra | 5-9 years | Salary (INR): Not disclosed | On-site | Full Time

As an ideal candidate for this role, you will design and architect scalable Big Data solutions within the Hadoop ecosystem. Your key duties will include leading architecture-level discussions for data platforms and analytics systems, building and optimizing data pipelines with PySpark and other distributed computing tools, translating business requirements into scalable data models and integration workflows, and ensuring the high performance and availability of enterprise-grade data processing systems. You will also play a crucial role in mentoring development teams and offering guidance on best practices and performance tuning.

To excel in this position, you must have architect-level experience with the Big Data ecosystem and enterprise data solutions. Proficiency in Hadoop, PySpark, and distributed data processing frameworks is essential, along with strong hands-on experience in SQL and data warehousing concepts. A deep understanding of data lake architecture, data ingestion, ETL, and orchestration tools is also required. Experience in performance optimization and handling large-scale data sets, coupled with excellent problem-solving, design, and analytical skills, will be highly valued. While not mandatory, exposure to cloud platforms such as AWS, Azure, or GCP for data solutions is a beneficial asset, and familiarity with data governance, data security, and metadata management is a good-to-have skill set for this role.

Joining our team offers you the opportunity to work with cutting-edge Big Data technologies, gain leadership exposure, and participate directly in architectural decisions. This is a stable, full-time position within a top-tier tech team, with a standard 5-day working schedule that supports work-life balance. If you are passionate about Big Data technologies and eager to contribute to innovative solutions, we welcome your application for this exciting opportunity.
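The posting centers on building and optimizing PySpark pipelines. As a rough illustration of that kind of work, here is a minimal, hypothetical ETL sketch; the S3 paths, the `events` dataset, and the column names are invented for the example and are not from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical ETL step: paths, columns, and partitioning are illustrative only.
spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Ingest raw JSON events from a data-lake landing zone.
raw = spark.read.json("s3a://datalake/raw/events/")

# Clean and aggregate: drop malformed rows, count events per user per day.
daily = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("user_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Write back as partitioned Parquet for downstream warehouse queries.
daily.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3a://datalake/curated/daily_event_counts/")

spark.stop()
```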

MERN Stack Developer - MongoDB/React.js | Bengaluru, Karnataka, India | 0 years | Salary: Not disclosed | On-site | Full Time

Immediate Joiners Only

Job Overview
We are seeking a skilled and experienced MERN Stack Developer with strong expertise in React.js (frontend), Node.js (backend), and AWS cloud services. This role demands a balance of hands-on technical skill and systems thinking to develop and deploy scalable, secure applications in a cloud-first environment.

Key Responsibilities
- Design and implement responsive, modern UIs using React.js (~60% frontend focus)
- Develop and maintain robust backend services using Node.js (~40% backend)
- Deploy, manage, and optimize applications on AWS (see the sketch after this listing)
- Collaborate with cross-functional teams including product, QA, and DevOps
- Participate in architectural planning, code reviews, and technical discussions
- Troubleshoot production issues and maintain high system reliability

Must-Have Skills
- Strong hands-on experience with React.js (Hooks, Context API, Redux, etc.)
- Solid knowledge of Node.js, Express.js, and REST API development
- Proficiency in AWS services (EC2, S3, Lambda, etc.)
- Understanding of modern application architecture and CI/CD workflows

Nice-to-Have / Coachable Skills
- Familiarity with customer support tools (e.g., Intercom, Zendesk)
- Exposure to event-driven architectures and asynchronous processing
- Experience working in Agile teams and fast-paced environments

(ref:hirist.tech)
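Since the role leans on AWS services such as S3 and Lambda, here is a minimal, hypothetical sketch of that kind of integration, written in Python with boto3 for brevity (the role itself is Node.js-centric); the bucket name, function name, artifact path, and payload are all invented for illustration:

```python
import json
import boto3

# Hypothetical resources: bucket and function names are illustrative only.
BUCKET = "example-app-assets"
FUNCTION = "example-thumbnail-worker"

s3 = boto3.client("s3")
lam = boto3.client("lambda")

# Upload a build artifact to S3, the kind of step a deploy script performs.
s3.upload_file("dist/app.js", BUCKET, "releases/app.js")

# Invoke a Lambda function synchronously and read its JSON response.
resp = lam.invoke(
    FunctionName=FUNCTION,
    Payload=json.dumps({"key": "releases/app.js"}).encode("utf-8"),
)
print(json.loads(resp["Payload"].read()))
```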

Big Data Architect - Hadoop/PySpark | Pune, Maharashtra | 5-9 years | Salary (INR): Not disclosed | On-site | Full Time

As an ideal candidate for this role, you will design and architect scalable Big Data solutions within the Hadoop ecosystem. Your key duties will include leading architecture-level discussions for data platforms and analytics systems, constructing and optimizing data pipelines with PySpark and other distributed computing tools, and transforming business requirements into scalable data models and integration workflows. It will be crucial for you to guarantee the high performance and availability of enterprise-grade data processing systems. Additionally, you will play a vital role in mentoring development teams and offering guidance on best practices and performance tuning.

Must-have skills for this position include architect-level experience with the Big Data ecosystem and enterprise data solutions; proficiency in Hadoop, PySpark, and distributed data processing frameworks; and hands-on experience with SQL and data warehousing concepts. A deep understanding of data lake architecture, data ingestion, ETL, and orchestration tools, along with experience in performance optimization and large-scale data handling, will be essential, as will excellent problem-solving, design, and analytical skills. While not mandatory, exposure to cloud platforms such as AWS, Azure, or GCP for data solutions and knowledge of data governance, data security, and metadata management would be beneficial.

Joining our team will provide you with the opportunity to work on cutting-edge Big Data technologies, gain leadership exposure, and be directly involved in architectural decisions. This role offers stability as a full-time position within a top-tier tech team, with a 5-day working schedule that ensures work-life balance. (ref:hirist.tech)
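This listing also asks for familiarity with orchestration tools. As one concrete example, here is a minimal, hypothetical Apache Airflow DAG that schedules a daily PySpark job; it assumes Airflow 2.4+ with the Apache Spark provider package installed, and the DAG id, schedule, and script path are invented for illustration:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

# Hypothetical orchestration sketch: ids, paths, and schedule are illustrative.
with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit the PySpark job to the cluster once per day.
    run_etl = SparkSubmitOperator(
        task_id="run_pyspark_etl",
        application="/opt/jobs/daily_events_etl.py",
    )
```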

Confluent Kafka Engineer | Pune, Maharashtra, India | 5-8 years | Salary: Not disclosed | On-site | Full Time

Immediate Joiners Preferred

Job Description
Nice Software is hiring a Confluent Kafka Engineer with strong expertise in Java, API integration, Apache Kafka, and SQL. You will be responsible for building scalable, real-time data streaming solutions and integrating them with enterprise systems. This role offers a high-impact opportunity to work on mission-critical systems in a modern engineering environment.

Key Responsibilities
- Develop, manage, and scale Kafka-based messaging solutions using Confluent Kafka (see the sketch after this listing)
- Build and enhance Java-based backend systems with Kafka integration
- Work on seamless API integration between internal/external services
- Write and optimize SQL queries for data manipulation and reporting
- Monitor and troubleshoot Kafka performance issues; ensure high availability and reliability
- Collaborate with cross-functional teams including DevOps, QA, and Product

Must-Have Skills
- 5-8 years of experience with solid hands-on knowledge of Apache Kafka / Confluent Kafka
- Strong backend programming skills in Java
- Proven experience in API integration (REST/SOAP)
- Good command of SQL for working with large datasets
- Understanding of Kafka internals: brokers, producers, consumers, schema registry, and connectors

Nice To Have
- Exposure to cloud platforms (AWS/GCP/Azure)
- Familiarity with container tools like Docker/Kubernetes
- Experience with tools like Kafka Streams, KSQL, Kafka Connect
- Monitoring and observability using Prometheus, Grafana, etc.

What We Offer
- Challenging projects in modern data architecture
- Learning-focused, collaborative work culture
- Opportunity to work with high-performing engineering teams

(ref:hirist.tech)
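The core of the role is producing to and consuming from Kafka. As a rough illustration of that pattern, here is a minimal, hypothetical sketch using Confluent's `confluent-kafka` Python client for brevity (the role itself calls for Java); the broker address, topic, consumer group, and payload are invented for the example:

```python
from confluent_kafka import Consumer, Producer

# Hypothetical broker and topic; values are illustrative only.
BOOTSTRAP = "localhost:9092"
TOPIC = "orders"

# Produce one message, with a callback reporting delivery success or failure.
producer = Producer({"bootstrap.servers": BOOTSTRAP})
producer.produce(
    TOPIC,
    key="order-1",
    value=b'{"amount": 42}',
    callback=lambda err, msg: print(err or f"delivered to {msg.topic()}"),
)
producer.flush()

# Consume from the same topic as part of a consumer group.
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "orders-reporting",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```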