3 - 6 years
35 - 40 Lacs
Posted: 10 hours ago
Work from Office
Full Time
Client : Our client is a leading Software-as-a-Service (SaaS) company that specializes in transforming data across the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions.

Requirements : Our client is looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities :
- Design, develop, and maintain robust, scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
- Optimize and manage SQL and NoSQL databases to ensure efficient storage, retrieval, and manipulation of both structured and unstructured data.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Document data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Desired Profile :
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools such as Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems such as Git.
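For context on the day-to-day work described above, here is a minimal sketch of an orchestrated ETL pipeline of the kind the role involves, written with Apache Airflow's TaskFlow API (Airflow 2.4+ assumed). This is illustrative only, not part of the posting: the DAG name, schedule, record fields, and step contents are all hypothetical placeholders.

from datetime import datetime

from airflow.decorators import dag, task


@dag(
    schedule="@daily",            # run the pipeline once per day
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["etl", "sketch"],
)
def claims_etl():                 # hypothetical DAG name
    @task
    def extract() -> list[dict]:
        # Hypothetical extract step: in practice this would pull from an
        # API, an S3 bucket, or an upstream database.
        return [{"claim_id": 1, "amount": "125.50"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Basic validation and normalization: cast amounts to float and
        # drop records missing an identifier.
        return [
            {"claim_id": r["claim_id"], "amount": float(r["amount"])}
            for r in rows
            if r.get("claim_id") is not None
        ]

    @task
    def load(rows: list[dict]) -> None:
        # Hypothetical load step: a real pipeline would write to Redshift,
        # BigQuery, or Snowflake via the relevant Airflow provider hook.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


claims_etl()

Chaining the tasks as load(transform(extract())) lets Airflow infer the extract -> transform -> load dependency graph automatically, which is the standard TaskFlow idiom for simple linear pipelines.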
Talent Acceleration Corridor