Associate Data/API Engineer

2 - 5 years

15 - 25 Lacs

Posted: 1 day ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

Job Summary:

The Associate Data Engineer works on a range of data engineering projects: supporting use cases, building data ingestion pipelines, and identifying potential process or data quality issues. On the SaaS Integration team, the engineer contributes to the design, development, and maintenance of API integrations across enterprise SaaS platforms, including Reltio, OneTrust, and Adobe CDP. This includes developing and maintaining RESTful APIs that integrate internal systems with Reltio (MDM), OneTrust (privacy and compliance), and Adobe CDP (customer data platform); participating in technical design reviews and solution architecture discussions; and collaborating with product managers, data engineers, and compliance teams to gather requirements and deliver API features.

The role also supports marketing analytics teams with tools that enable analytics and business communities to work easier, faster, and smarter. The engineer brings together data from internal and external partners to build a curated, marketing-analytics-focused data and tools ecosystem, and plays a crucial role in evolving that ecosystem to meet the marketing analytics community's needs.
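Much of this integration work follows one pattern: authenticate against the SaaS platform, then exchange JSON over REST. As an illustrative sketch only (the base URL, endpoint shape, and header set are hypothetical, not Reltio's, OneTrust's, or Adobe's actual APIs), building such an authenticated request in Python might look like:

```python
import urllib.request

def build_entity_request(base_url: str, entity_id: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for an MDM entity record.

    The URL shape and endpoint are hypothetical placeholders, not a
    real vendor API; only the bearer-token pattern is the point here.
    """
    url = f"{base_url}/entities/{entity_id}"
    req = urllib.request.Request(url, method="GET")
    # Bearer tokens are the usual way SaaS REST APIs authenticate callers.
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req
```

Keeping request construction separate from sending it (via `urllib.request.urlopen` or an HTTP client library) makes the integration logic easy to unit test without network access.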


Minimum Qualifications

• Bachelor’s degree in Computer Science or Engineering, or equivalent technical training and experience.

• 0-1 years of experience in Data & Analytics


Preferred Qualifications

• 2+ years of experience in Data & Analytics

• Experience in API development using Java, Node.js, or Python.

• Familiarity with SaaS platforms such as Reltio and OneTrust.

• Experience with API management tools (e.g., Apigee, Azure API Management).

• Understanding of RESTful design principles, OAuth2, OpenAPI/Swagger specifications.

• Exposure to cloud-native development and CI/CD pipelines.

• Exposure to Postman for API testing and automation.

• Proficient in JUnit for unit testing and integration testing of APIs.
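In practice, the OAuth2 item above usually means the client-credentials grant (RFC 6749, section 4.4): the client exchanges its ID and secret for a short-lived access token. A minimal Python sketch of constructing that token request (the token URL and credentials are placeholders; this builds the request but does not send it):

```python
import base64
import urllib.parse
import urllib.request

def build_token_request(token_url: str, client_id: str, client_secret: str) -> urllib.request.Request:
    """Construct an OAuth2 client-credentials token request.

    Per RFC 6749 section 4.4: client credentials go in a Basic auth
    header, and the grant type goes in a form-encoded POST body.
    """
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    body = urllib.parse.urlencode({"grant_type": "client_credentials"}).encode()
    req = urllib.request.Request(token_url, data=body, method="POST")
    req.add_header("Authorization", f"Basic {creds}")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return req
```

The JSON response to this request would carry the `access_token` used as a bearer token on subsequent API calls.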

Roles and Responsibilities

Essential Job Functions:

• Collaborates with internal and external stakeholders to manage data logistics, including data specifications, transfers, structures, and rules. Collaborates with business users, business analysts, and technical architects to transform business requirements into analytical workbenches, tools, and dashboards that reflect usability best practices and current design trends. - (15%)

• Access, extract, and transform credit and retail data from a variety of sources of all sizes (including client marketing databases and 2nd- and 3rd-party data) using Hadoop, Spark, SQL, and other big data technologies. Provide automation support to analytical teams for data-centric needs using orchestration tools, SQL, and possibly other big data/cloud solutions to improve efficiency. - (15%)

• Support the Data Engineer and Sr. Data Engineer on new analytical proof-of-concept and tool exploration projects. Effectively manage time and resources to deliver concurrent projects on time and correctly. Create POCs to ingest and process streaming data using Spark and HDFS. - (10%)

• Answer and troubleshoot questions about data sets and analytical tools. Develop, maintain, and enhance new and existing analytics tools to support internal customers. Ingest data from files, streams, and databases, then process it with Python and PySpark and store it in Hive or a NoSQL database. - (10%)

• Manage data coming from different sources; participate in HDFS maintenance and the loading of structured and unstructured data. Apply Agile Scrum methodology on the client big data platform, using Git for version control. Import and export data between HDFS and RDBMS using Sqoop. - (10%)

• Demonstrate an understanding of Hadoop architecture and the underlying Hadoop framework, including storage management. Create POCs to ingest and process streaming data using Spark and HDFS. Work on the back end using Scala, Python, and Spark to implement aggregation logic. - (10%)

• Assist in building and maintaining data pipelines to ensure reliable data delivery across various business units. Help manage and maintain existing ETL and ELT processes, and assist in developing new data integration workflows. - (10%)

• Assist in documenting data processes and adhering to best practices and technical standards in data engineering. Assist in creating and optimizing data models and visualizations to support business insights and reporting needs. - (10%)

• Proficient in writing SQL queries and performing database analysis for good performance. Experience working with Python or Scala, Spark, Hadoop, Hive, Oozie, Sqoop, HDFS, Impala, shell scripts, and Microsoft Azure services such as ADLS/Blob Storage, Azure Data Factory, Azure Functions, and Databricks. Apply basic knowledge of REST APIs for designing networked applications. - (10%)
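The data-quality duties above often come down to checks like per-column missing-value counts over ingested rows. A minimal plain-Python sketch of such a check (the function name, column names, and sample rows are illustrative; in practice this would run as a Spark or SQL aggregation over much larger data):

```python
from collections import Counter

def null_counts(rows, columns):
    """Count missing values per column across a batch of ingested rows.

    A row is a dict; a value counts as missing when it is None or an
    empty string. Columns with no missing values are omitted from the
    result, mirroring a GROUP BY that drops zero-count groups.
    """
    counts = Counter()
    for row in rows:
        for col in columns:
            value = row.get(col)
            if value is None or value == "":
                counts[col] += 1
    return dict(counts)
```

The same idea scales up in PySpark as a one-pass aggregation of `isNull()` flags per column, which is how a pipeline would surface the data quality issues mentioned in the summary.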
