Hybrid
Full Time
A Day in the Life

We're a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation. Join our new MiniMed India Hub as a Digital Engineer. We are working to improve how healthcare addresses the needs of more people, in more ways, and in more places around the world. As a PySpark Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines using PySpark. You will work closely with data scientists, analysts, and other stakeholders to ensure the efficient processing and analysis of large datasets, while handling complex transformations and aggregations.

Responsibilities may include the following, and other duties may be assigned:
- Design, develop, and maintain scalable and efficient ETL pipelines using PySpark.
- Work with structured and unstructured data from various sources.
- Optimize and tune PySpark applications for performance and scalability.
- Collaborate with data scientists and analysts to understand data requirements, review Business Requirement documents, and deliver high-quality datasets.
- Implement data quality checks and ensure data integrity.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
- Document technical specifications and maintain comprehensive documentation for data pipelines.
- Stay up to date with the latest trends and technologies in big data and distributed computing.

Required Knowledge and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 4-5 years of experience in data engineering, with a focus on PySpark.
- Proficiency in Python and Spark, with strong coding and debugging skills.
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Experience with data warehousing solutions like Redshift, Snowflake, Databricks, or Google BigQuery.
- Familiarity with data lake architectures and data storage solutions.
- Experience with big data technologies such as Hadoop, Hive, and Kafka.
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Preferred Skills:
- Experience with Databricks.
- Experience with orchestration tools like Apache Airflow or AWS Step Functions.
- Knowledge of machine learning workflows and experience working with data scientists.
- Understanding of data security and governance best practices.
- Familiarity with streaming data platforms and real-time data processing.
- Knowledge of CI/CD pipelines and version control systems (e.g., Git).

Physical Job Requirements

The above statements are intended to describe the general nature and level of work performed by employees assigned to this position; they are not an exhaustive list of all the responsibilities and skills required of this position.

If interested, please share your updated CV at ashwini.ukekar@medtronic.com

Regards,
Ashwin Ukekar
Sourcing Specialist, Medtronic