Posted: 2 months ago
Work from Office
Full Time
About the Role
Grade Level (for internal use): 09
Job Role: ML Engineer

The Role: S&P Global is looking for a Data Engineer to join our Data Science and Modelling team. The ideal candidate is highly motivated and goal-oriented, and comfortable working in a dynamic environment with a wide range of stakeholders and functional teams.

Job Summary: As a Data Engineer at S&P Global, you will use your technical skills to architect, build, and maintain our evolving data infrastructure, which supports our advanced analytics and machine learning initiatives. You will work closely with stakeholders to acquire, process, and refine large datasets, with a focus on building scalable, optimized data pipelines. Your work will enable data scientists to extract meaningful insights and develop AI-driven solutions for purposes ranging from internal decision-making to product development.

Job Responsibilities:
- Collaborate with stakeholders, including data scientists, analysts, and other engineers, to understand and refine data processing and transformation requirements.
- Design, construct, install, and maintain large-scale data processing systems and related infrastructure.
- Build high-performance algorithms, prototypes, and conceptual models that enable efficient retrieval and analysis of data.
- Implement ETL processes to acquire, validate, and process incoming data from diverse sources.
- Ensure data architecture and models adhere to compliance, privacy, and security standards.
- Work with data scientists to optimize data science and machine learning algorithms and models.
- Provide technical expertise in resolving data-related issues, including data quality, data lineage, and data processing errors.
- Manage the deployment of analytics solutions into production and maintain them.
- Maintain high-quality processes and deliver projects in collaborative Agile team environments.

Requirements:
- 3+ years of programming experience, particularly in Python, R, Java, or C#.
- 1+ years of experience working with SQL or NoSQL databases.
- Experience working with PySpark.
- University degree in Computer Science, Engineering, Mathematics, or a related discipline.
- Strong understanding of big data technologies such as Hadoop, Spark, or Kafka.
- Demonstrated ability to design and implement end-to-end, scalable, performant data pipelines.
- Experience with workflow management platforms such as Airflow.
- Strong analytical and problem-solving skills.
- Ability to collaborate and communicate effectively with both technical and non-technical stakeholders.
- Experience building solutions in an Agile working environment.
- Experience working with Git or other source control tools.