5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office required)
Notice Period: Immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. Candidates with PySpark experience but no Python experience will not be considered.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview:
We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving ability, and the capacity to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and enforce data engineering best practices.
- Stay current with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
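The HIPAA-compliance responsibility above implies per-record sanitization before streamed healthcare data leaves the ingestion boundary. Below is a minimal, hypothetical Python sketch of the kind of transform a Kafka consumer loop might apply to each message; the field names and masking rule are illustrative assumptions, not from the posting.

```python
# Hypothetical per-message transform for a healthcare streaming pipeline.
# In a real deployment this would run inside a Kafka consumer loop
# (e.g. between poll() and produce()); it is a pure function here so
# the logic can be tested in isolation.
import hashlib
import json

PHI_FIELDS = {"patient_name", "ssn", "address"}  # assumed field names

def mask_phi(raw_message: bytes) -> bytes:
    """Replace assumed PHI fields with a one-way hash so downstream
    consumers never see raw identifiers."""
    record = json.loads(raw_message)
    for field in PHI_FIELDS & record.keys():
        digest = hashlib.sha256(str(record[field]).encode()).hexdigest()
        record[field] = f"masked:{digest[:12]}"
    return json.dumps(record, sort_keys=True).encode()
```

Keeping the transform pure (bytes in, bytes out) makes it easy to unit test and to reuse unchanged whether the pipeline runs on plain Kafka consumers or inside a Databricks streaming job.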
Posted 6 days ago
3 - 8 years
11 - 16 Lacs
Bengaluru
Work from Office
Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable.

Experience level: At least 3 - 5 years of hands-on experience in Data Engineering and ETL.

Desired Knowledge & Experience:
- Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming; knowledge of Spark internals (Catalyst/Tungsten/Photon)
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Test: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, Continuous Delivery, Acceptance Testing
- Big Data Design: Lakehouse/Medallion Architecture, Parquet/Delta, Partitioning, Distribution, Data Skew, Compaction
- Languages: Python/Functional Programming (FP)
- SQL: T-SQL/Spark SQL/HiveQL
- Storage: Data Lake and Big Data Storage Design

Additionally, it is helpful to know the basics of:
- Data Pipelines: ADF/Synapse Pipelines/Oozie/Airflow
- Languages: Scala, Java
- NoSQL: Cosmos, Mongo, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, Stored Procedures
- Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka
- Data Catalog: Azure Purview, Apache Atlas, Informatica

Required Soft Skills & Other Capabilities:
- Great attention to detail and good analytical abilities
- Good planning and organizational skills
- Collaborative approach to sharing ideas and finding solutions
- Ability to work independently as well as in a global team environment
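The Lakehouse/Medallion design mentioned above promotes data through bronze (raw), silver (cleansed), and gold (business-level) layers. The sketch below is a deliberately simplified, dependency-free Python illustration of that promotion flow; in practice each layer would be a Delta table on Databricks, and the validation rules shown are assumptions.

```python
# Illustrative sketch of medallion-architecture promotion logic.
# Real implementations use Delta tables (bronze/silver/gold) on
# Databricks; plain dicts and lists stand in here so the layered
# flow itself is visible and testable.

def to_silver(bronze_rows):
    """Silver layer: cleanse and validate raw (bronze) records.
    The rules shown (non-null id, parseable amount) are assumed examples."""
    return [
        {**row, "amount": float(row["amount"])}
        for row in bronze_rows
        if row.get("id") is not None and row.get("amount") is not None
    ]

def to_gold(silver_rows):
    """Gold layer: business-level aggregate (total amount per id)."""
    totals = {}
    for row in silver_rows:
        totals[row["id"]] = totals.get(row["id"], 0.0) + row["amount"]
    return totals
```

The point of the layering is that each stage consumes only the previous stage's output, so bad raw records are quarantined at the silver boundary instead of corrupting downstream aggregates.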
Posted 2 months ago
8 - 13 years
20 - 35 Lacs
Delhi NCR, Gurgaon
Work from Office
Looking for a candidate at the M. / Sr. M. level in Data Engineering for an aviation company based out of Gurgaon. The role requires hands-on software/data engineering experience building large, scalable, and reliable enterprise technology platforms. Interested candidates, please respond.

Required Candidate Profile:
- Excellent programming skills in Python, creation of responsive dashboards, data mining, and handling large data sets.
- Hands-on with one of the programming languages Scala/Python.
Posted 2 months ago
2 - 6 years
3 - 6 Lacs
Maharashtra
Work from Office
Responsibilities:
- Design, develop, and optimize scalable, high-performance Spark applications using Scala.
- Work on mission-critical projects, ensuring high availability, reliability, and performance.
- Analyze and optimize Spark jobs for efficient data processing and resource utilization.
- Collaborate with cross-functional teams to deliver robust, production-ready solutions.
- Troubleshoot and resolve complex issues related to Spark applications and data pipelines.
- Integrate Spark applications with Kafka for real-time data streaming and MongoDB for data storage and retrieval.
- Follow best practices in coding, testing, and deployment to ensure high-quality deliverables.
- Mentor junior team members and provide technical leadership.

Mandatory Skills and Qualifications:
- 7+ years of hands-on experience in Scala programming and Apache Spark.
- Strong expertise in Spark architecture, including RDDs, DataFrames, and Spark SQL.
- Proven experience in performance tuning and optimization of Spark applications.
- Hands-on experience with Spark Streaming for real-time data processing is a must.
- Solid understanding of distributed computing and big data processing concepts.
- Proficient in Linux, with the ability to work in a Linux environment.
- Strong knowledge of data structures and algorithms, with a focus on space and time complexity analysis.
- Ability to work independently and deliver results in a fast-paced, high-pressure environment.
- Excellent problem-solving, debugging, and analytical skills.

Good-to-Have Skills:
- Experience with Apache Kafka for real-time data streaming.
- Knowledge of MongoDB or other NoSQL databases.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and containerization (e.g., Docker, Kubernetes).
- Understanding of DevOps practices and CI/CD pipelines.

Interview Focus Areas:
- Coding exercise in Scala: a hands-on coding assessment to evaluate problem-solving and coding skills.
- Spark integration with other technologies: practical understanding of how Spark integrates with tools like Kafka, MongoDB, etc.
- Spark Streaming: demonstrated experience with real-time data processing using Spark Streaming.
- Best practices and optimization in Spark: in-depth knowledge of Spark job optimization, resource management, and performance tuning.
- Data structures and space/time complexity analysis: strong grasp of data structures and algorithms, with a focus on optimizing space and time complexity.

Shift Requirements:
- Flexible shift hours, with the shift ending by midday US time.
- Willingness to adapt to dynamic project needs and timelines.
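As a conceptual illustration of the Spark Streaming interview focus above, the sketch below models a tumbling-window count, the aggregation pattern Spark Structured Streaming expresses as `groupBy(window(col("ts"), "10 seconds"))`. It is a simplified, assumed model in plain Python rather than Spark code, so the windowing arithmetic itself is visible.

```python
# Simplified model of tumbling-window aggregation: events are assigned
# to fixed, non-overlapping time windows and counted per (window, key).
# In Spark this runs distributed over micro-batches; the assignment
# rule (floor-divide the timestamp by the window size) is the same.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """events: iterable of (timestamp_seconds, key) pairs.
    Returns {(window_start, key): count} for non-overlapping windows."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)
```

Because windows are non-overlapping, each event contributes to exactly one window, which is what lets a streaming engine finalize and emit a window's result once its watermark passes.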
Posted 3 months ago
4 - 8 years
7 - 17 Lacs
Bengaluru
Work from Office
In this role, you will:
- Lead moderately complex initiatives within Technology and contribute to large-scale data processing framework initiatives related to enterprise strategy deliverables.
- Build and maintain optimized, highly available data pipelines that facilitate deeper analysis and reporting.
- Review and analyze moderately complex business, operational, or technical challenges that require an in-depth evaluation of variable factors.
- Oversee data integration work, including developing a data model, maintaining a data warehouse and analytics environment, and writing scripts for data integration and analysis.
- Resolve moderately complex issues and lead teams to meet data engineering deliverables, leveraging a solid understanding of data information policies, procedures, and compliance requirements.
- Collaborate and consult with colleagues and managers to resolve data engineering issues and achieve strategic goals.

Required Qualifications:
- 4+ years of Data Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications:
- A bachelor's degree or higher in computer science.
- 4+ years of software engineering experience.
- 4+ years of working experience in Data Ingestion, Data Curation, and Data Extraction.
- 4+ years of experience working with basic Spark (Spark architecture, Spark JDBC) and SQL Server.
- 3+ years of experience with SQL and NoSQL database integration with Spark (MS SQL Server and MongoDB).
- Working experience with Unix, GitHub (or another version management system), Autosys, and DevOps.
- Working experience with performance optimization in SQL Server and Spark.
- 2+ years of Agile experience.

Job Expectations:
- Understanding of distributed systems (CAP theorem, partitioning and bucketing, replication, memory layouts, consistency).
- Understanding of cloud-ready enterprise solutions in one or a combination of the following: Amazon Web Services (AWS), Google Cloud Platform (GCP), or Pivotal Cloud Foundry (PCF).
- Knowledge of Python, Apache Kafka, or Confluent Enterprise.
- Ability to work effectively both independently and in a team environment.

Posting End Date: 25 Feb 2025. The job posting may come down early due to the volume of applicants.

We Value Diversity
At Wells Fargo, we believe in diversity, equity and inclusion in the workplace; accordingly, we welcome applications for employment from all qualified candidates, regardless of race, color, gender, national origin, religion, age, sexual orientation, gender identity, gender expression, genetic information, individuals with disabilities, pregnancy, marital status, status as a protected veteran or any other status protected by applicable law.

Employees support our focus on building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture, which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for the execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is an emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.

Candidates applying to job openings posted in the US: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic.

Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples, and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.

Applicants with Disabilities
To request a medical accommodation during the application or interview process, visit .

Drug and Alcohol Policy
Wells Fargo maintains a drug-free workplace. Please see our to learn more.

Wells Fargo Recruitment and Hiring Requirements:
a. Third-party recordings are prohibited unless authorized by Wells Fargo.
b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.
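The partition-and-bucketing expectation in the role above can be illustrated with a small, assumed Python sketch of hash bucketing, the assignment rule behind Spark's bucketed tables (bucket = hash(key) mod number of buckets). The hash function and record shape here are illustrative stand-ins, not Spark's actual internals.

```python
# Minimal illustration of hash bucketing: records with the same key
# always land in the same bucket, which is why joins on a bucketed
# key can avoid a shuffle. zlib.crc32 stands in for Spark's Murmur3.
import zlib

def bucket_for(key: str, num_buckets: int) -> int:
    """Stable bucket assignment: the same key maps to the same
    bucket on every run and every node."""
    return zlib.crc32(key.encode()) % num_buckets

def bucketize(records, key_field, num_buckets):
    """Group records into num_buckets lists by hashing the key field."""
    buckets = [[] for _ in range(num_buckets)]
    for rec in records:
        buckets[bucket_for(rec[key_field], num_buckets)].append(rec)
    return buckets
```

The stability of the assignment is the important property: two datasets bucketed with the same function and bucket count can be joined bucket-by-bucket without redistributing data.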
Posted 3 months ago