5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Job title: Senior Software Engineer
Experience: 5-8 years
Primary skills: Python, Spark or PySpark, DWH ETL
Database: SparkSQL or PostgreSQL
Secondary skills: Databricks (Delta Lake, Delta tables, Unity Catalog)
Work Model: Hybrid (twice weekly)
Cab Facility: Yes
Work Timings: 10am to 7pm
Interview Process: 3 rounds (3rd round F2F mandatory)
Work Location: Karle Town Tech Park, Nagawara, Hebbal, Bengaluru 560045

About Business Unit:
The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Its responsibilities span architectural ownership of critical product features, techno-product leadership, architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. The team designs multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contributes significantly to interoperability between EPC products and the broader enterprise ecosystem. It fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient, performant, secure, and resilient platforms that form the backbone of Epsilon People Cloud.

Why we are looking for you:
- You have experience working as a Data Engineer with strong database fundamentals and an ETL background.
- You have experience working in a data warehouse environment, dealing with data volumes of terabytes and above.
- You have experience working with relational data systems, preferably PostgreSQL and SparkSQL.
- You have excellent design and coding skills and can mentor a junior engineer on the team.
- You have excellent written and verbal communication skills.
- You are experienced and comfortable working with global clients.
- You work well with teams and can work with multiple collaborators, including clients, vendors, and delivery teams.
- You are proficient with bug tracking and test management toolsets that support development processes such as CI/CD.

What you will enjoy in this role:
- As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands of the industry.
- You will work with the latest tools and technology and deal with data at petabyte scale.
- Work on homegrown frameworks built on Spark, Airflow, and more.
- Exposure to the Digital Marketing domain, where Epsilon is a market leader.
- Understand and work closely with consumer data across different segments, which ultimately provides insights into consumer behaviours and patterns used to design digital ad strategies.
- As part of a dynamic team, you will have opportunities to innovate and put your recommendations forward, using existing standard methodologies and defining new ones as industry standards evolve.
- Opportunity to work with Business, System, and Delivery teams to build a solid foundation in the Digital Marketing domain.
- An open and transparent environment that values innovation and efficiency.

What will you do?
- Develop a deep understanding of the business context in which your team operates, and present feature recommendations in an agile working environment.
- Lead, design, and code solutions on and off the database to ensure application access and enable data-driven decision making for the company's multi-faceted ad serving operations.
- Work closely with Engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible, and evolving in lockstep with the needs of an ever-changing business model. This role requires deep expertise in Spark and strong proficiency in ETL, SQL, and modern data engineering practices.
- Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/SparkSQL, integrating various data sources to support business operations.
- Lead in the areas of solution design, code development, quality assurance, data modelling, and business intelligence.
- Mentor junior engineers on the team.
- Stay abreast of developments in the data world in terms of governance, quality, and performance optimization.
- Run effective client meetings, understand deliverables, and drive successful outcomes.

Qualifications:
- Bachelor's Degree in Computer Science or an equivalent degree is required.
- 5-8 years of data engineering experience, with expertise in Apache Spark and databases (preferably Databricks) in marketing technologies and data management, and technical understanding of these areas.
- Ability to monitor and tune Databricks workloads to ensure high performance and scalability, adapting to business needs as required.
- Solid experience writing and tuning both basic and advanced SQL.
- Experience with Python.
- Solid understanding of CI/CD practices, with experience using Git for version control and integration on Spark data projects.
- Good understanding of Disaster Recovery and Business Continuity solutions.
- Experience with scheduling applications with complex interdependencies, preferably Airflow.
- Good experience working with geographically and culturally diverse teams.
- Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue, or Databricks.
- Excellent written and verbal communication skills.
- Ability to handle complex products.
- Good communication and problem-solving skills, with the ability to manage multiple priorities.
- Ability to diagnose and solve problems quickly.
- Diligent; able to multi-task, prioritize, and quickly change priorities, with good time management.
- Good to have: knowledge of cloud platforms (including cloud security) and familiarity with Terraform or other infrastructure-as-code tools.

About Epsilon:
Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we have provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements.
Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.
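The pipeline work this role describes centres on PySpark/SparkSQL ETL landing in Delta tables on Databricks. As a rough illustration of that shape of work (the paths, table names, and columns below are hypothetical, not taken from the posting), a minimal batch ETL step might look like:

```python
# Minimal PySpark ETL sketch for a Databricks-style environment.
# Source path, table name, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("campaign-etl-sketch").getOrCreate()

# Extract: read raw consumer events (hypothetical source path).
raw = spark.read.parquet("/mnt/raw/consumer_events/")

# Transform: basic cleansing and a per-segment daily aggregate.
daily = (raw
         .filter(F.col("event_type").isNotNull())
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("segment", "event_date")
         .agg(F.count("*").alias("events"),
              F.countDistinct("consumer_id").alias("consumers")))

# Load: write to a Delta table (Delta is built into the Databricks runtime;
# elsewhere the delta-spark package must be on the classpath).
(daily.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.daily_segment_activity"))
```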
Posted 6 days ago
7.0 - 12.0 years
20 - 35 Lacs
Pune
Hybrid
Job Duties and Responsibilities:
We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get the opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform.

With the Data Engineering team you will get the opportunity to:
- Design and implement data engineering solutions that are scalable, reliable, and secure in a cloud environment.
- Understand and translate business needs into data engineering solutions.
- Build large-scale data pipelines that can handle big data sets using distributed data processing techniques, supporting the efforts of the data science and data application teams.
- Partner with cross-functional stakeholders, including product managers, architects, data quality engineers, and application and Quantitative Science end users, to deliver engineering solutions.
- Contribute to defining data governance across the data platform.

Basic Requirements:
- A minimum of a BS degree in computer science, software engineering, or a related scientific discipline is desired.
- 3+ years of work experience building scalable and robust data engineering solutions.
- Strong understanding of object-oriented programming and proficiency in Python (TDD) and PySpark to build scalable algorithms.
- 3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques.
- 2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and incremental data processing.
- Experience with Delta Lake and Unity Catalog.
- Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries.
- 3+ years of experience building scalable ETL/ELT data pipelines on Databricks and AWS (EMR).
- 2+ years of experience orchestrating data pipelines using Apache Airflow/MWAA.
- Understanding of and experience with AWS services including ADX, EC2, and S3.
- 3+ years of experience with data modeling techniques for structured/unstructured datasets.
- Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena/Redshift Spectrum).
- Passion for healthcare and improving patient outcomes.
- Analytical thinking with strong problem-solving skills.
- Stays on top of emerging technologies and possesses a willingness to learn.

Bonus Experience (optional):
- Experience with an Agile environment.
- Experience operating in a CI/CD environment.
- Experience building HTTP/REST APIs using popular frameworks.
- Healthcare experience.
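One requirement above is incremental data processing on Delta tables. A minimal sketch of the usual upsert pattern, using the open-source delta-spark API with hypothetical table and key names, could look like this:

```python
# Illustrative incremental-processing sketch: MERGE a batch of changes
# into a Delta table. Table names and the join key are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("incremental-merge-sketch").getOrCreate()

# A batch of new/changed records (hypothetical staging table).
updates = spark.table("staging.patient_events_increment")

# Upsert into the target Delta table keyed on event_id.
target = DeltaTable.forName(spark, "curated.patient_events")
(target.alias("t")
       .merge(updates.alias("u"), "t.event_id = u.event_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```

In a Delta Live Tables pipeline the same logic would typically be expressed declaratively, but this MERGE pattern is the underlying idea.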
Posted 1 month ago
7.0 - 12.0 years
25 - 30 Lacs
Hyderabad, Bengaluru
Hybrid
Cloud Data Engineer

The Cloud Data Engineer will be responsible for developing the data lake platform and all applications on the Azure cloud. Proficiency in data engineering, data modeling, SQL, and Python programming is essential. The Data Engineer will provide design and development solutions for applications in the cloud.

Essential Job Functions:
- Understand requirements and collaborate with the team to design and deliver projects.
- Design and implement data lakehouse projects within Azure.
- Develop applications across their lifecycle utilizing Microsoft Azure technologies.
- Participate in design, planning, and necessary documentation.
- Engage in Agile ceremonies including daily standups, scrum, retrospectives, demos, and code reviews.
- Apply hands-on experience with Python/SQL development and Azure data pipelines.
- Collaborate with the team to develop and deliver cross-functional products.

Key Skills:
a. Data Engineering and SQL
b. Python
c. PySpark
d. Azure Data Lake and ADF
e. Databricks
f. CI/CD
g. Strong communication

Other Responsibilities:
- Document and maintain project artifacts.
- Maintain comprehensive knowledge of industry standards, methodologies, processes, and best practices.
- Complete training as required for Privacy, Code of Conduct, etc.
- Promptly report any known or suspected loss, theft, or unauthorized disclosure or use of PI to the General Counsel/Chief Compliance Officer or Chief Information Officer.
- Adhere to the company's compliance program.
- Safeguard the company's intellectual property, information, and assets.
- Other duties as assigned.

Minimum Qualifications and Job Requirements:
- Bachelor's degree in Computer Science.
- 7 years of hands-on experience designing and developing distributed data pipelines.
- 5 years of hands-on experience with Azure data service technologies.
- 5 years of hands-on experience in Python, SQL, object-oriented programming, ETL, and unit testing.
- Experience with data integration via APIs, web services, and queues.
- Experience with Azure DevOps and CI/CD, as well as agile tools and processes including JIRA and Confluence.
- Required: Azure Data Engineer Associate and Databricks data engineering certifications.
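To make the Azure-side expectations above concrete, here is a hedged sketch of one Databricks-on-Azure pipeline step: reading raw files from ADLS Gen2 and cleansing them with SQL. The storage account, container, and table names are placeholders, and storage credentials are assumed to be configured already:

```python
# Sketch of an Azure lakehouse step: ADLS Gen2 -> SQL cleansing -> Delta.
# The abfss:// path and all table/column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("azure-lakehouse-sketch").getOrCreate()

# Read raw JSON from a hypothetical ADLS Gen2 container (auth preconfigured).
raw_path = "abfss://raw@examplestorageacct.dfs.core.windows.net/orders/"
orders = spark.read.format("json").load(raw_path)
orders.createOrReplaceTempView("orders_raw")

# SQL cleansing step, in line with the Python/SQL emphasis above.
cleaned = spark.sql("""
    SELECT order_id,
           CAST(order_ts AS TIMESTAMP) AS order_ts,
           UPPER(region)               AS region,
           amount
    FROM orders_raw
    WHERE order_id IS NOT NULL
""")

cleaned.write.format("delta").mode("append").saveAsTable("curated.orders")
```

In practice a step like this would be triggered by an ADF pipeline or a Databricks job, with CI/CD handled through Azure DevOps as the posting describes.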
Posted 1 month ago
8.0 - 12.0 years
20 - 25 Lacs
Hyderabad, Pune
Hybrid
Job Title: Data Engineer
Work Location: India, Pune / Hyderabad (Hybrid)

Responsibilities include:
- Design, implement, and optimize end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Develop data pipelines to extract and transform data in near real time using cloud-native technologies.
- Implement data validation and quality checks to ensure accuracy and consistency.
- Monitor system performance, troubleshoot issues, and implement optimizations to enhance reliability and efficiency.
- Collaborate with business users, analysts, and other stakeholders to understand data requirements and deliver tailored solutions.
- Document technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation.
- Provide technical guidance and support to team members and stakeholders as needed.

Desirable Competencies:
- 8+ years of work experience.
- Proficiency in writing complex SQL queries on MPP systems (Snowflake/Redshift).
- Experience with Databricks and Delta tables.
- Data engineering experience with Spark/Scala/Python.
- Experience with the Microsoft Azure stack (Azure Storage Accounts, Data Factory, and Databricks).
- Experience with Azure DevOps and CI/CD pipelines.
- Working knowledge of Python.
- Comfort participating in 2-week sprint development cycles.

About Us
Founded in 1956, Williams-Sonoma Inc. is the premier specialty retailer of high-quality products for the kitchen and home in the United States. Today, Williams-Sonoma, Inc. is one of the United States' largest e-commerce retailers, with some of the best-known and most beloved brands in home furnishings. Our family of brands includes Williams-Sonoma, Pottery Barn, Pottery Barn Kids, Pottery Barn Teens, West Elm, Williams-Sonoma Home, Rejuvenation, GreenRow, and Mark and Graham. We currently operate retail stores globally, and our products are also available to customers through our catalogues and online worldwide. Williams-Sonoma has established a technology center in Pune, India to enhance its global operations. The India Technology Center serves as a critical hub for innovation, focusing on cutting-edge solutions in areas such as e-commerce, supply chain optimization, and customer experience management. By integrating advanced technologies like artificial intelligence, data analytics, and machine learning, the India Technology Center plays a crucial role in accelerating Williams-Sonoma's growth and maintaining its competitive edge in the global market.
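The competency list above leans on complex SQL over MPP systems (window functions, nested queries). As a purely illustrative example of that query shape, run here through SparkSQL against a hypothetical schema (the same SQL would run largely unchanged on Snowflake or Redshift):

```python
# Illustrative analytical SQL: a nested query with a window function
# ranking products by revenue within each brand. Schema is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("analytical-sql-sketch").getOrCreate()

top_products = spark.sql("""
    SELECT brand, product_id, revenue
    FROM (
        SELECT brand,
               product_id,
               SUM(amount) AS revenue,
               ROW_NUMBER() OVER (
                   PARTITION BY brand
                   ORDER BY SUM(amount) DESC
               ) AS rn
        FROM sales.order_lines
        GROUP BY brand, product_id
    ) AS ranked
    WHERE rn <= 5
""")
top_products.show()
```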
Posted 2 months ago