Navi Mumbai, Maharashtra, India
Not disclosed
On-site
Full Time
Job Details:
Position: Senior Data Engineer (Databricks + Azure)
Experience: 8+ years
Work Mode: Onsite
Location: Navi Mumbai
Notice Period: Immediate joiners only
Note: Only candidates from Mumbai or Navi Mumbai will be considered.

Job Summary:
We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities:
• Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
• Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
• Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.
• Work with large volumes of structured and unstructured data, ensuring high availability and performance.
• Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.
• Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
• Establish best practices for code versioning, deployment automation, and data governance.
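In Databricks, the Bronze/Silver/Gold responsibilities above would normally be built with Autoloader, Structured Streaming, and Delta tables; as a minimal, library-free sketch of the layer contract only (all function and field names here are illustrative, not Databricks APIs):

```python
import json
from collections import defaultdict

# Bronze: land raw payloads untouched, tagging source metadata.
def to_bronze(raw_messages):
    return [{"raw": m, "source": "kafka"} for m in raw_messages]

# Silver: parse, type, and filter out malformed records.
def to_silver(bronze_rows):
    silver = []
    for row in bronze_rows:
        try:
            event = json.loads(row["raw"])
            silver.append({"user": event["user"], "amount": float(event["amount"])})
        except (ValueError, KeyError):
            continue  # a real pipeline would quarantine bad records
    return silver

# Gold: business-level aggregate (total spend per user).
def to_gold(silver_rows):
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["user"]] += row["amount"]
    return dict(totals)

messages = ['{"user": "a", "amount": "10.5"}',
            '{"user": "b", "amount": "3"}',
            'not json',
            '{"user": "a", "amount": "4.5"}']
gold = to_gold(to_silver(to_bronze(messages)))
print(gold)  # {'a': 15.0, 'b': 3.0}
```

The point of the layering is that each stage has one job: Bronze preserves everything, Silver enforces schema, Gold serves the business question.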
Required Technical Skills:
• Strong expertise in Azure Databricks and Spark Structured Streaming, including:
• Processing modes (append, update, complete)
• Output modes (append, complete, update)
• Checkpointing and state management
• Experience with Kafka integration for real-time data pipelines
• Deep understanding of Medallion Architecture
• Proficiency with Databricks Autoloader and schema evolution
• Deep understanding of Unity Catalog and foreign catalogs
• Strong knowledge of Spark SQL, Delta Lake, and DataFrames
• Expertise in performance tuning (query optimization, cluster configuration, caching strategies)
• Data management strategies
• Governance and access management
• Data modelling, data warehousing concepts, and Databricks as a platform
• Solid understanding of window functions

Proven experience in:
• Merge/upsert logic
• Implementing SCD Type 1 and Type 2
• Handling CDC (Change Data Capture) scenarios
• Expertise in at least one industry: Retail, Telecom, or Energy
• Real-time use case execution
• Data modelling
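The merge/upsert and SCD Type 2 items above would typically be a Delta Lake MERGE in Databricks; the Type 2 logic itself (expire the current row for a changed key, append a new current version) can be sketched in plain Python. Table and column names here are made up for illustration:

```python
from datetime import date

def scd2_merge(dim_rows, updates, today):
    """SCD Type 2: expire the current row for a changed key and
    append a new current row; unchanged keys are left alone."""
    current = {r["key"]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for upd in updates:
        cur = current.get(upd["key"])
        if cur is not None and cur["value"] == upd["value"]:
            continue  # no change: nothing to do
        if cur is not None:
            cur["is_current"] = False   # close the old version
            cur["valid_to"] = today
        out.append({"key": upd["key"], "value": upd["value"],
                    "valid_from": today, "valid_to": None,
                    "is_current": True})  # open the new version
    return out

dim = [{"key": 1, "value": "Pune", "valid_from": date(2024, 1, 1),
        "valid_to": None, "is_current": True}]
dim = scd2_merge(dim, [{"key": 1, "value": "Mumbai"}], date(2025, 6, 1))
print([r["value"] for r in dim if r["is_current"]])  # ['Mumbai']
```

SCD Type 1 would instead overwrite the value in place; Type 2, as here, keeps full history via the valid_from/valid_to window, which is what CDC-driven dimension loads usually need.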
Pune, Maharashtra, India
Not disclosed
On-site
Full Time
Job Details:
Position: Dot Net Full Stack Specialist / Lead
Experience: 5+ years
Work Mode: Onsite
Location: Pune
Notice Period: Immediate to 15 days
Must Have: .NET, C#, SQL, Angular/AWS/Azure

Job Summary:
• Strong organizational and project management skills.
• Strong expertise in .NET, C#, SQL, and Angular.
• Proficiency with developing applications on clouds such as Azure, AWS, and GCP.
• Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript.
• Familiarity with JavaScript frameworks such as Vue.js, Angular, and React.
• Familiarity with database technologies such as SQL Server, PostgreSQL, MySQL, and MongoDB.
• Excellent verbal communication skills.
• Good problem-solving skills.
India
Not disclosed
Remote
Full Time
We’re looking for a Talent Acquisition Specialist to join our team on a 3-month contract and support our fast-moving hiring engine. This is an excellent opportunity for someone early in their career who’s looking to develop strong, foundational skills in recruitment while working in a fast-paced, global startup environment. The role comes with fixed compensation of ₹20,000–₹25,000 per month for the duration of the contract.

What You’ll Work On
• 60 cold calls daily / 300 weekly: you’ll speak with a high volume of candidates in this fast-paced, outreach-heavy role.
• 20+ qualified candidate submissions per day: submissions must be accurate, relevant, and interview-ready.
• 2 closures per week from your pipeline: our hiring teams will rely on your sourcing to drive weekly conversions.
• Daily reporting and tracker management: you’ll be expected to log calls, candidate progress, and funnel metrics clearly and consistently.

What We’re Looking For
• Strong communication and phone skills: clear, confident, and able to build quick rapport with candidates
• Familiarity with Naukri, LinkedIn, and other sourcing platforms to find and engage relevant candidates
• Ability to spot red flags in resumes and conversations: job-hopping, buzzwords, unclear responsibilities, etc.
• Comfortable with daily performance tracking and reporting your own metrics without handholding
• Exposure to hiring, sales, placement cells, or cold calling, even if through internships or volunteer roles
• Familiarity with tools like Google Sheets, ATS and CRM platforms, or any basic candidate tracking system
• Self-driven, detail-focused, and committed to meeting targets without constant reminders
• Comfortable working in a remote, fast-paced, outcome-driven environment

What You’ll Gain
• Hands-on experience hiring across functions for a U.S. startup
• Real ownership, direct impact, and steep learning in a short time
• Potential for long-term opportunities based on performance

This is a 3-month contract role, ideal for recent graduates or early-career professionals looking to break into talent acquisition. If you’re serious about building skill, putting in the reps, and working with a team that values performance, we’d love to hear from you.

Applications will only be accepted through Wellfound. Apply here: https://wellfound.com/l/2BoqTT
Ahmedabad, Gujarat, India
Not disclosed
Remote
Full Time
Location: Remote/Hybrid (India-based preferred)
Type: Full-Time

Must Haves (Don’t Apply If You Miss Any)
• 3+ years of experience in Data Engineering
• Proven hands-on experience with ETL pipelines (end-to-end ownership)
• AWS resources: deep experience with EC2, Athena, Lambda, and Step Functions (non-negotiable; critical to the role)
• Strong with MySQL (not negotiable)
• Docker (setup, deployment, troubleshooting)

Good To Have (Adds Major Value)
• Airflow or any modern orchestration tool
• PySpark experience
• Python ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy, DLT (Data Load Tool)

About You
• You’re a builder, not just a maintainer.
• You can work independently but communicate crisply.
• You thrive in fast-moving startup environments.
• You care about ownership and impact, not just code.
• Include the code word Red Panda in your application message, so that we know you have read this section.

What You’ll Do
• Architect, build, and optimize robust data pipelines and workflows
• Own AWS resource configuration, optimization, and troubleshooting
• Collaborate with product and engineering teams to deliver business impact fast
• Automate and scale data processes; no manual-work culture
• Shape the data foundation for real business decisions

Cut to the chase. Only serious, relevant applicants will be considered.
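The core loop this posting describes (own an ETL pipeline end to end and land results in a SQL store) can be sketched with the standard library alone, using sqlite3 as a stand-in for MySQL; the table, columns, and sample data are invented for illustration:

```python
import csv
import io
import sqlite3

def extract(csv_text):
    # Extract: parse raw CSV into dict rows.
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    # Transform: cast types and drop rows with non-positive amounts.
    out = []
    for r in rows:
        amount = float(r["amount"])
        if amount > 0:
            out.append((r["order_id"], amount))
    return out

def load(conn, rows):
    # Load: idempotent upsert keyed on order_id (in MySQL this
    # would be INSERT ... ON DUPLICATE KEY UPDATE).
    conn.executemany(
        "INSERT INTO orders (order_id, amount) VALUES (?, ?) "
        "ON CONFLICT(order_id) DO UPDATE SET amount = excluded.amount",
        rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, amount REAL)")
raw = "order_id,amount\nA1,10.0\nA2,-5.0\nA1,12.5"
load(conn, transform(extract(raw)))
print(conn.execute("SELECT order_id, amount FROM orders").fetchall())
# [('A1', 12.5)]
```

The upsert makes the load step safe to re-run, which matters once the pipeline is orchestrated (Airflow, Step Functions) and retries become routine.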