Source and screen candidates through job portals, databases, referrals, and networking.
Handle end-to-end recruitment: sourcing, screening, interviewing, and onboarding.
Coordinate with hiring managers to understand job requirements and create effective job postings.
Conduct telephonic and face-to-face interviews, assessing candidates' skills and suitability.
Manage the selection process, including interview scheduling, feedback, and offer negotiation.
Maintain and update the recruitment database and reports.
Build and nurture a strong candidate pipeline for current and future requirements.
Ensure a smooth joining and onboarding process for new hires.
Stay updated with market trends and salary benchmarks.
Meet daily, weekly, and monthly hiring targets.
Roles and Responsibilities:
Develop and maintain batch data pipelines using Apache Spark and Databricks notebooks.
Build streaming applications with Spark Streaming and Kafka (a brief sketch follows this listing).
Implement Kafka Streams for near real-time processing in Scala-based microservices.
Design and orchestrate ETL workflows using Azure Data Factory (ADF).
Write optimized SQL queries for data transformation and reporting.
Develop backend services in Scala and Java, following clean architecture principles.
Use Git for version control and Maven/Gradle for build automation.
Monitor and troubleshoot systems using Grafana and Grafana Loki.
Integrate with Collibra for metadata management and governance (nice to have).
Build frontend components using JavaScript and React.

Preferred candidate profile:
3-6 years of experience in full stack or data engineering roles.
Strong hands-on experience with Scala, Spark, and Kafka.
Familiarity with Databricks and ADF in cloud environments (preferably Azure).
Exposure to metadata tools like Collibra.
Bonus: Experience with React and JavaScript for frontend development.
Excellent problem-solving, debugging, and communication skills.
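For illustration only, below is a minimal Scala sketch of the kind of Spark-plus-Kafka streaming work the role describes: a Structured Streaming job that reads from a Kafka topic and writes per-minute event counts to the console. The broker address, the "clicks" topic, and the counting logic are hypothetical placeholders, not details from the posting, and the job assumes the spark-sql-kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch: consume events from a (hypothetical) Kafka topic and
// emit per-minute counts to the console.
object ClickCountStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ClickCountStream")
      .getOrCreate()

    import spark.implicits._

    // Subscribe to the placeholder "clicks" topic on a local broker.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "clicks")
      .load()

    // Kafka values arrive as bytes; cast to string and count per 1-minute window.
    val counts = raw
      .selectExpr("CAST(value AS STRING) AS click", "timestamp")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

In a Databricks environment the same aggregation would more likely be written to a managed table rather than the console sink; the console output here is only to keep the sketch self-contained.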