Posted: 1 day ago
Employment Type: Full Time
Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon

About Tredence
Tredence focuses on the last-mile delivery of powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. The largest companies across industries engage with us and deploy their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees. Visit our website for more details.

Role Overview
We are seeking a driven, hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure or GCP. As a Data Engineer at Tredence, you will ingest, process, and model large-scale data, implement scalable data pipelines, and apply foundational data warehousing principles. The role also involves direct collaboration with cross-functional teams and client stakeholders.

Key Responsibilities
- Develop robust, scalable data pipelines using PySpark on cloud platforms such as Azure Databricks or GCP Dataflow (see the pipeline sketch after the skills lists below).
- Write optimized SQL queries for data transformation, analysis, and validation.
- Implement and support data warehouse models and principles, including:
  - Fact and dimension modeling
  - Star and snowflake schemas
  - Slowly Changing Dimensions (SCD) (see the SCD Type 2 sketch below)
  - Change Data Capture (CDC)
  - Medallion architecture
- Monitor, troubleshoot, and improve pipeline performance and data quality.
- Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
- Communicate technical updates and contribute to sprint-level delivery.

Mandatory Skills
- Strong hands-on experience with SQL and Python
- Working knowledge of PySpark for data transformation
- Exposure to at least one cloud platform: Azure or GCP
- Good understanding of data engineering and warehousing fundamentals
- Excellent debugging and problem-solving skills
- Strong written and verbal communication skills

Preferred Skills
- Experience with Databricks (Community Edition or enterprise version)
- Familiarity with data orchestration tools such as Airflow or Azure Data Factory
- Exposure to CI/CD processes and version control (e.g., Git)
- Understanding of Agile/Scrum methodology and collaborative development
- Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.)

Required Skills
- Azure Databricks / GCP
- Python
- SQL
- PySpark
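As a rough illustration of the pipeline work named in the responsibilities above, here is a minimal PySpark bronze-to-silver step in the medallion pattern. The paths and column names (order_id, amount, order_ts) are hypothetical assumptions for the sketch, not details taken from the role description.

# Minimal bronze-to-silver step (medallion pattern) in PySpark.
# All paths and column names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze layer: raw, schema-on-read ingestion of source JSON events.
bronze = spark.read.json("/data/bronze/orders")

# Silver layer: typed, deduplicated, quality-checked records.
silver = (
    bronze
    .dropDuplicates(["order_id"])                         # drop replayed events
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)                          # simple data-quality gate
)

# Persist as columnar Parquet for downstream (gold-layer) modeling.
silver.write.mode("overwrite").parquet("/data/silver/orders")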
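And a hedged sketch of one of the warehousing principles listed above, a Slowly Changing Dimension Type 2 update, using only PySpark DataFrame operations. The dimension layout (customer_id, address, valid_from, valid_to, is_current) and the paths are assumptions made for illustration; in production a managed MERGE (e.g., Delta Lake) would typically replace the manual union shown here.

# SCD Type 2 sketch in PySpark: expire changed current rows, append new versions.
# The schema (customer_id, address, valid_from, valid_to, is_current) is assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

dim = spark.read.parquet("/data/gold/dim_customer")        # existing dimension
incoming = spark.read.parquet("/data/silver/customers")    # latest source snapshot

# Keys whose tracked attribute changed against the current dimension row.
changed = (
    incoming.alias("src")
    .join(dim.filter("is_current").alias("tgt"), "customer_id")
    .filter(F.col("src.address") != F.col("tgt.address"))
    .select("src.*")
)
changed_keys = changed.select("customer_id")

# 1) Expire the superseded current versions.
expired = (
    dim.join(changed_keys, "customer_id", "left_semi")
    .filter("is_current")
    .withColumn("valid_to", F.current_date())
    .withColumn("is_current", F.lit(False))
)

# 2) Open a new current version for each changed key.
new_versions = (
    changed
    .withColumn("valid_from", F.current_date())
    .withColumn("valid_to", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True))
)

# 3) Keep history and unchanged keys as-is, then union everything by column name.
history = dim.join(changed_keys, "customer_id", "left_semi").filter(~F.col("is_current"))
unchanged = dim.join(changed_keys, "customer_id", "left_anti")
updated_dim = unchanged.unionByName(history).unionByName(expired).unionByName(new_versions)

# Write to a new path: Spark cannot overwrite a dataset it is still reading from.
updated_dim.write.mode("overwrite").parquet("/data/gold/dim_customer_updated")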